00:00:00.002 Started by upstream project "autotest-nightly-lts" build number 2035
00:00:00.002 originally caused by:
00:00:00.003 Started by upstream project "nightly-trigger" build number 3295
00:00:00.003 originally caused by:
00:00:00.003 Started by timer
00:00:00.003 Started by timer
00:00:00.085 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.086 The recommended git tool is: git
00:00:00.086 using credential 00000000-0000-0000-0000-000000000002
00:00:00.088 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/crypto-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.142 Fetching changes from the remote Git repository
00:00:00.144 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.207 Using shallow fetch with depth 1
00:00:00.207 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.207 > git --version # timeout=10
00:00:00.263 > git --version # 'git version 2.39.2'
00:00:00.263 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.298 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.298 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.366 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.378 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.390 Checking out Revision bd3e126a67c072de18fcd072f7502b1f7801d6ff (FETCH_HEAD)
00:00:07.390 > git config core.sparsecheckout # timeout=10
00:00:07.399 > git read-tree -mu HEAD # timeout=10
00:00:07.415 > git checkout -f bd3e126a67c072de18fcd072f7502b1f7801d6ff # timeout=5
00:00:07.434 Commit message: "jenkins/autotest: add raid-vg subjob to autotest configs"
00:00:07.434 > git rev-list --no-walk bd3e126a67c072de18fcd072f7502b1f7801d6ff # timeout=10
00:00:07.525 [Pipeline] Start of Pipeline
00:00:07.536 [Pipeline] library
00:00:07.537 Loading library shm_lib@master
00:00:07.537 Library shm_lib@master is cached. Copying from home.
00:00:07.547 [Pipeline] node
00:00:07.568 Running on WFP51 in /var/jenkins/workspace/crypto-phy-autotest
00:00:07.569 [Pipeline] {
00:00:07.576 [Pipeline] catchError
00:00:07.577 [Pipeline] {
00:00:07.591 [Pipeline] wrap
00:00:07.602 [Pipeline] {
00:00:07.610 [Pipeline] stage
00:00:07.612 [Pipeline] { (Prologue)
00:00:07.808 [Pipeline] sh
00:00:08.092 + logger -p user.info -t JENKINS-CI
00:00:08.177 [Pipeline] echo
00:00:08.179 Node: WFP51
00:00:08.189 [Pipeline] sh
00:00:08.492 [Pipeline] setCustomBuildProperty
00:00:08.502 [Pipeline] echo
00:00:08.503 Cleanup processes
00:00:08.506 [Pipeline] sh
00:00:08.788 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:08.788 1083016 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:08.803 [Pipeline] sh
00:00:09.130 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:09.131 ++ grep -v 'sudo pgrep'
00:00:09.131 ++ awk '{print $1}'
00:00:09.131 + sudo kill -9
00:00:09.131 + true
00:00:09.142 [Pipeline] cleanWs
00:00:09.151 [WS-CLEANUP] Deleting project workspace...
00:00:09.151 [WS-CLEANUP] Deferred wipeout is used...
00:00:09.156 [WS-CLEANUP] done
00:00:09.161 [Pipeline] setCustomBuildProperty
00:00:09.174 [Pipeline] sh
00:00:09.456 + sudo git config --global --replace-all safe.directory '*'
00:00:09.540 [Pipeline] httpRequest
00:00:09.571 [Pipeline] echo
00:00:09.573 Sorcerer 10.211.164.101 is alive
00:00:09.581 [Pipeline] httpRequest
00:00:09.585 HttpMethod: GET
00:00:09.585 URL: http://10.211.164.101/packages/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz
00:00:09.586 Sending request to url: http://10.211.164.101/packages/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz
00:00:09.603 Response Code: HTTP/1.1 200 OK
00:00:09.604 Success: Status code 200 is in the accepted range: 200,404
00:00:09.604 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz
00:00:12.600 [Pipeline] sh
00:00:12.882 + tar --no-same-owner -xf jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz
00:00:12.898 [Pipeline] httpRequest
00:00:12.925 [Pipeline] echo
00:00:12.926 Sorcerer 10.211.164.101 is alive
00:00:12.932 [Pipeline] httpRequest
00:00:12.936 HttpMethod: GET
00:00:12.936 URL: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz
00:00:12.937 Sending request to url: http://10.211.164.101/packages/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz
00:00:12.945 Response Code: HTTP/1.1 200 OK
00:00:12.946 Success: Status code 200 is in the accepted range: 200,404
00:00:12.946 Saving response body to /var/jenkins/workspace/crypto-phy-autotest/spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz
00:00:45.725 [Pipeline] sh
00:00:46.007 + tar --no-same-owner -xf spdk_dbef7efacb6f3438cd0fe1344a67946669fb1419.tar.gz
00:00:48.555 [Pipeline] sh
00:00:48.839 + git -C spdk log --oneline -n5
00:00:48.839 dbef7efac test: fix dpdk builds on ubuntu24
00:00:48.839 4b94202c6 lib/event: Bug fix for framework_set_scheduler
00:00:48.839 507e9ba07 nvme: add lock_depth for ctrlr_lock
00:00:48.839 62fda7b5f nvme: check pthread_mutex_destroy() return value
00:00:48.839 e03c164a1 nvme: add nvme_ctrlr_lock
00:00:48.852 [Pipeline] }
00:00:48.869 [Pipeline] // stage
00:00:48.878 [Pipeline] stage
00:00:48.881 [Pipeline] { (Prepare)
00:00:48.902 [Pipeline] writeFile
00:00:48.917 [Pipeline] sh
00:00:49.199 + logger -p user.info -t JENKINS-CI
00:00:49.209 [Pipeline] sh
00:00:49.489 + logger -p user.info -t JENKINS-CI
00:00:49.505 [Pipeline] sh
00:00:49.824 + cat autorun-spdk.conf
00:00:49.825 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:49.825 SPDK_TEST_BLOCKDEV=1
00:00:49.825 SPDK_TEST_ISAL=1
00:00:49.825 SPDK_TEST_CRYPTO=1
00:00:49.825 SPDK_TEST_REDUCE=1
00:00:49.825 SPDK_TEST_VBDEV_COMPRESS=1
00:00:49.825 SPDK_RUN_UBSAN=1
00:00:49.832 RUN_NIGHTLY=1
00:00:49.838 [Pipeline] readFile
00:00:49.870 [Pipeline] withEnv
00:00:49.873 [Pipeline] {
00:00:49.889 [Pipeline] sh
00:00:50.172 + set -ex
00:00:50.173 + [[ -f /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf ]]
00:00:50.173 + source /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf
00:00:50.173 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:50.173 ++ SPDK_TEST_BLOCKDEV=1
00:00:50.173 ++ SPDK_TEST_ISAL=1
00:00:50.173 ++ SPDK_TEST_CRYPTO=1
00:00:50.173 ++ SPDK_TEST_REDUCE=1
00:00:50.173 ++ SPDK_TEST_VBDEV_COMPRESS=1
00:00:50.173 ++ SPDK_RUN_UBSAN=1
00:00:50.173 ++ RUN_NIGHTLY=1
00:00:50.173 + case $SPDK_TEST_NVMF_NICS in
00:00:50.173 + DRIVERS=
00:00:50.173 + [[ -n '' ]]
00:00:50.173 + exit 0
00:00:50.184 [Pipeline] }
00:00:50.209 [Pipeline] // withEnv
00:00:50.216 [Pipeline] }
00:00:50.236 [Pipeline] // stage
00:00:50.247 [Pipeline] catchError
00:00:50.249 [Pipeline] {
00:00:50.267 [Pipeline] timeout
00:00:50.267 Timeout set to expire in 1 hr 0 min
00:00:50.269 [Pipeline] {
00:00:50.283 [Pipeline] stage
00:00:50.284 [Pipeline] { (Tests)
00:00:50.298 [Pipeline] sh
00:00:50.579 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/crypto-phy-autotest
00:00:50.579 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest
00:00:50.579 + DIR_ROOT=/var/jenkins/workspace/crypto-phy-autotest
00:00:50.579 + [[ -n /var/jenkins/workspace/crypto-phy-autotest ]]
00:00:50.579 + DIR_SPDK=/var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:50.579 + DIR_OUTPUT=/var/jenkins/workspace/crypto-phy-autotest/output
00:00:50.579 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/spdk ]]
00:00:50.579 + [[ ! -d /var/jenkins/workspace/crypto-phy-autotest/output ]]
00:00:50.579 + mkdir -p /var/jenkins/workspace/crypto-phy-autotest/output
00:00:50.579 + [[ -d /var/jenkins/workspace/crypto-phy-autotest/output ]]
00:00:50.579 + [[ crypto-phy-autotest == pkgdep-* ]]
00:00:50.579 + cd /var/jenkins/workspace/crypto-phy-autotest
00:00:50.579 + source /etc/os-release
00:00:50.579 ++ NAME='Fedora Linux'
00:00:50.579 ++ VERSION='38 (Cloud Edition)'
00:00:50.579 ++ ID=fedora
00:00:50.579 ++ VERSION_ID=38
00:00:50.579 ++ VERSION_CODENAME=
00:00:50.579 ++ PLATFORM_ID=platform:f38
00:00:50.579 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:50.579 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:50.579 ++ LOGO=fedora-logo-icon
00:00:50.579 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:50.579 ++ HOME_URL=https://fedoraproject.org/
00:00:50.579 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:50.579 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:50.579 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:50.579 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:50.579 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:50.579 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:50.579 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:50.579 ++ SUPPORT_END=2024-05-14
00:00:50.579 ++ VARIANT='Cloud Edition'
00:00:50.579 ++ VARIANT_ID=cloud
00:00:50.579 + uname -a
00:00:50.579 Linux spdk-wfp-51 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:50.579 + sudo /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status
00:00:53.992 Hugepages
00:00:53.992 node hugesize free / total
00:00:53.992 node0 1048576kB 0 / 0
00:00:53.992 node0 2048kB 0 / 0
00:00:53.992 node1 1048576kB 0 / 0
00:00:53.992 node1 2048kB 0 / 0
00:00:53.992
00:00:53.992 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:53.992 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:53.992 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:53.992 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:53.992 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:53.992 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:53.992 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:53.992 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:53.992 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:53.992 NVMe 0000:5e:00.0 8086 0b60 0 nvme nvme0 nvme0n1
00:00:53.992 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:53.992 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:53.992 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:53.992 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:53.992 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:53.992 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:53.992 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:53.992 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:53.992 VMD 0000:85:05.5 8086 201d 1 - - -
00:00:53.992 VMD 0000:ae:05.5 8086 201d 1 - - -
00:00:53.992 + rm -f /tmp/spdk-ld-path
00:00:53.992 + source autorun-spdk.conf
00:00:53.992 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:53.992 ++ SPDK_TEST_BLOCKDEV=1
00:00:53.992 ++ SPDK_TEST_ISAL=1
00:00:53.992 ++ SPDK_TEST_CRYPTO=1
00:00:53.992 ++ SPDK_TEST_REDUCE=1
00:00:53.992 ++ SPDK_TEST_VBDEV_COMPRESS=1
00:00:53.992 ++ SPDK_RUN_UBSAN=1
00:00:53.992 ++ RUN_NIGHTLY=1
00:00:53.992 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:53.992 + [[ -n '' ]]
00:00:53.992 + sudo git config --global --add safe.directory /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:53.992 + for M in /var/spdk/build-*-manifest.txt
00:00:53.992 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:53.992 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/
00:00:53.992 + for M in /var/spdk/build-*-manifest.txt
00:00:53.992 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:53.992 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/crypto-phy-autotest/output/
00:00:53.992 ++ uname
00:00:53.992 + [[ Linux == \L\i\n\u\x ]]
00:00:53.992 + sudo dmesg -T
00:00:53.992 + sudo dmesg --clear
00:00:53.992 + dmesg_pid=1083974
00:00:53.992 + [[ Fedora Linux == FreeBSD ]]
00:00:53.992 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:53.992 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:53.992 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:53.992 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:53.992 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2
00:00:53.992 + sudo dmesg -Tw
00:00:53.992 + [[ -x /usr/src/fio-static/fio ]]
00:00:53.992 + export FIO_BIN=/usr/src/fio-static/fio
00:00:53.992 + FIO_BIN=/usr/src/fio-static/fio
00:00:53.992 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\c\r\y\p\t\o\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:53.992 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:53.992 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:53.992 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:53.992 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:53.992 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:53.992 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:53.992 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:53.992 + spdk/autorun.sh /var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf
00:00:53.992 Test configuration:
00:00:53.992 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:53.992 SPDK_TEST_BLOCKDEV=1
00:00:53.992 SPDK_TEST_ISAL=1
00:00:53.992 SPDK_TEST_CRYPTO=1
00:00:53.992 SPDK_TEST_REDUCE=1
00:00:53.992 SPDK_TEST_VBDEV_COMPRESS=1
00:00:53.992 SPDK_RUN_UBSAN=1
00:00:53.992 RUN_NIGHTLY=1
11:53:01 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh
00:00:53.992 11:53:01 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:53.992 11:53:01 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:53.993 11:53:01 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:53.993 11:53:01 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:53.993 11:53:01 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:53.993 11:53:01 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:53.993 11:53:01 -- paths/export.sh@5 -- $ export PATH
00:00:53.993 11:53:01 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:53.993 11:53:01 -- common/autobuild_common.sh@437 -- $ out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output
00:00:53.993 11:53:01 -- common/autobuild_common.sh@438 -- $ date +%s
00:00:53.993 11:53:01 -- common/autobuild_common.sh@438 -- $ mktemp -dt spdk_1721901181.XXXXXX
00:00:53.993 11:53:01 -- common/autobuild_common.sh@438 -- $ SPDK_WORKSPACE=/tmp/spdk_1721901181.v2Dkni
00:00:53.993 11:53:01 -- common/autobuild_common.sh@440 -- $ [[ -n '' ]]
00:00:53.993 11:53:01 -- common/autobuild_common.sh@444 -- $ '[' -n '' ']'
00:00:53.993 11:53:01 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/'
00:00:53.993 11:53:01 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:53.993 11:53:01 -- common/autobuild_common.sh@453 -- $ scanbuild='scan-build -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/crypto-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:53.993 11:53:01 -- common/autobuild_common.sh@454 -- $ get_config_params
00:00:53.993 11:53:01 -- common/autotest_common.sh@387 -- $ xtrace_disable
00:00:53.993 11:53:01 -- common/autotest_common.sh@10 -- $ set +x
00:00:53.993 11:53:01 -- common/autobuild_common.sh@454 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk'
00:00:53.993 11:53:01 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:53.993 11:53:01 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:53.993 11:53:01 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/crypto-phy-autotest/spdk
00:00:53.993 11:53:01 -- spdk/autobuild.sh@16 -- $ date -u
00:00:53.993 Thu Jul 25 09:53:01 AM UTC 2024
00:00:53.993 11:53:01 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:54.252 LTS-60-gdbef7efac
00:00:54.252 11:53:01 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:54.252 11:53:01 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:54.252 11:53:01 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:54.252 11:53:01 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:00:54.252 11:53:01 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:00:54.252 11:53:01 -- common/autotest_common.sh@10 -- $ set +x
00:00:54.252 ************************************
00:00:54.252 START TEST ubsan
00:00:54.252 ************************************
00:00:54.252 11:53:01 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan'
00:00:54.252 using ubsan
00:00:54.253
00:00:54.253 real 0m0.000s
00:00:54.253 user 0m0.000s
00:00:54.253 sys 0m0.000s
00:00:54.253 11:53:01 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:00:54.253 11:53:01 -- common/autotest_common.sh@10 -- $ set +x
00:00:54.253 ************************************
00:00:54.253 END TEST ubsan
00:00:54.253 ************************************
00:00:54.253 11:53:01 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:54.253 11:53:01 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:54.253 11:53:01 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:54.253 11:53:01 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:54.253 11:53:01 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:54.253 11:53:01 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:54.253 11:53:01 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:54.253 11:53:01 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:54.253 11:53:01 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/crypto-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --with-vbdev-compress --with-dpdk-compressdev --with-crypto --enable-ubsan --enable-coverage --with-ublk --with-shared
00:00:54.253 Using default SPDK env in /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk
00:00:54.253 Using default DPDK in /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build
00:00:54.511 Using 'verbs' RDMA provider
00:01:10.336 Configuring ISA-L (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/isa-l/spdk-isal.log)...done.
00:01:22.587 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/crypto-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done.
00:01:22.587 Creating mk/config.mk...done.
00:01:22.587 Creating mk/cc.flags.mk...done.
00:01:22.587 Type 'make' to build.
00:01:22.587 11:53:28 -- spdk/autobuild.sh@69 -- $ run_test make make -j72
00:01:22.587 11:53:28 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']'
00:01:22.587 11:53:28 -- common/autotest_common.sh@1083 -- $ xtrace_disable
00:01:22.587 11:53:28 -- common/autotest_common.sh@10 -- $ set +x
00:01:22.588 ************************************
00:01:22.588 START TEST make
00:01:22.588 ************************************
00:01:22.588 11:53:28 -- common/autotest_common.sh@1104 -- $ make -j72
00:01:22.588 make[1]: Nothing to be done for 'all'.
00:01:54.675 The Meson build system
00:01:54.675 Version: 1.3.1
00:01:54.675 Source dir: /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk
00:01:54.675 Build dir: /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp
00:01:54.675 Build type: native build
00:01:54.675 Program cat found: YES (/usr/bin/cat)
00:01:54.675 Project name: DPDK
00:01:54.675 Project version: 23.11.0
00:01:54.675 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:54.675 C linker for the host machine: cc ld.bfd 2.39-16
00:01:54.675 Host machine cpu family: x86_64
00:01:54.675 Host machine cpu: x86_64
00:01:54.675 Message: ## Building in Developer Mode ##
00:01:54.675 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:54.675 Program check-symbols.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:54.675 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:54.675 Program python3 found: YES (/usr/bin/python3)
00:01:54.675 Program cat found: YES (/usr/bin/cat)
00:01:54.675 Compiler for C supports arguments -march=native: YES
00:01:54.675 Checking for size of "void *" : 8
00:01:54.675 Checking for size of "void *" : 8 (cached)
00:01:54.675 Library m found: YES
00:01:54.675 Library numa found: YES
00:01:54.675 Has header "numaif.h" : YES
00:01:54.675 Library fdt found: NO
00:01:54.675 Library execinfo found: NO
00:01:54.675 Has header "execinfo.h" : YES
00:01:54.675 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:54.675 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:54.675 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:54.675 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:54.675 Run-time dependency openssl found: YES 3.0.9
00:01:54.675 Run-time dependency libpcap found: YES 1.10.4
00:01:54.675 Has header "pcap.h" with dependency libpcap: YES
00:01:54.675 Compiler for C supports arguments -Wcast-qual: YES
00:01:54.675 Compiler for C supports arguments -Wdeprecated: YES
00:01:54.675 Compiler for C supports arguments -Wformat: YES
00:01:54.675 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:54.675 Compiler for C supports arguments -Wformat-security: NO
00:01:54.675 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:54.675 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:54.675 Compiler for C supports arguments -Wnested-externs: YES
00:01:54.675 Compiler for C supports arguments -Wold-style-definition: YES
00:01:54.675 Compiler for C supports arguments -Wpointer-arith: YES
00:01:54.675 Compiler for C supports arguments -Wsign-compare: YES
00:01:54.675 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:54.675 Compiler for C supports arguments -Wundef: YES
00:01:54.675 Compiler for C supports arguments -Wwrite-strings: YES
00:01:54.675 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:54.675 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:54.675 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:54.675 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:54.675 Program objdump found: YES (/usr/bin/objdump)
00:01:54.675 Compiler for C supports arguments -mavx512f: YES
00:01:54.675 Checking if "AVX512 checking" compiles: YES
00:01:54.675 Fetching value of define "__SSE4_2__" : 1
00:01:54.675 Fetching value of define "__AES__" : 1
00:01:54.675 Fetching value of define "__AVX__" : 1
00:01:54.675 Fetching value of define "__AVX2__" : 1
00:01:54.675 Fetching value of define "__AVX512BW__" : 1
00:01:54.675 Fetching value of define "__AVX512CD__" : 1
00:01:54.675 Fetching value of define "__AVX512DQ__" : 1
00:01:54.675 Fetching value of define "__AVX512F__" : 1
00:01:54.675 Fetching value of define "__AVX512VL__" : 1
00:01:54.675 Fetching value of define "__PCLMUL__" : 1
00:01:54.675 Fetching value of define "__RDRND__" : 1
00:01:54.675 Fetching value of define "__RDSEED__" : 1
00:01:54.675 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:54.675 Fetching value of define "__znver1__" : (undefined)
00:01:54.675 Fetching value of define "__znver2__" : (undefined)
00:01:54.675 Fetching value of define "__znver3__" : (undefined)
00:01:54.675 Fetching value of define "__znver4__" : (undefined)
00:01:54.675 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:54.675 Message: lib/log: Defining dependency "log"
00:01:54.675 Message: lib/kvargs: Defining dependency "kvargs"
00:01:54.675 Message: lib/telemetry: Defining dependency "telemetry"
00:01:54.675 Checking for function "getentropy" : NO
00:01:54.675 Message: lib/eal: Defining dependency "eal"
00:01:54.675 Message: lib/ring: Defining dependency "ring"
00:01:54.675 Message: lib/rcu: Defining dependency "rcu"
00:01:54.675 Message: lib/mempool: Defining dependency "mempool"
00:01:54.675 Message: lib/mbuf: Defining dependency "mbuf"
00:01:54.675 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:54.675 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:54.675 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:54.675 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:54.675 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:54.675 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:01:54.675 Compiler for C supports arguments -mpclmul: YES
00:01:54.675 Compiler for C supports arguments -maes: YES
00:01:54.675 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:54.675 Compiler for C supports arguments -mavx512bw: YES
00:01:54.675 Compiler for C supports arguments -mavx512dq: YES
00:01:54.675 Compiler for C supports arguments -mavx512vl: YES
00:01:54.675 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:54.675 Compiler for C supports arguments -mavx2: YES
00:01:54.675 Compiler for C supports arguments -mavx: YES
00:01:54.675 Message: lib/net: Defining dependency "net"
00:01:54.675 Message: lib/meter: Defining dependency "meter"
00:01:54.675 Message: lib/ethdev: Defining dependency "ethdev"
00:01:54.675 Message: lib/pci: Defining dependency "pci"
00:01:54.676 Message: lib/cmdline: Defining dependency "cmdline"
00:01:54.676 Message: lib/hash: Defining dependency "hash"
00:01:54.676 Message: lib/timer: Defining dependency "timer"
00:01:54.676 Message: lib/compressdev: Defining dependency "compressdev"
00:01:54.676 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:54.676 Message: lib/dmadev: Defining dependency "dmadev"
00:01:54.676 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:54.676 Message: lib/power: Defining dependency "power"
00:01:54.676 Message: lib/reorder: Defining dependency "reorder"
00:01:54.676 Message: lib/security: Defining dependency "security"
00:01:54.676 Has header "linux/userfaultfd.h" : YES
00:01:54.676 Has header "linux/vduse.h" : YES
00:01:54.676 Message: lib/vhost: Defining dependency "vhost"
00:01:54.676 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:54.676 Message: drivers/bus/auxiliary: Defining dependency "bus_auxiliary"
00:01:54.676 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:54.676 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:54.676 Compiler for C supports arguments -std=c11: YES
00:01:54.676 Compiler for C supports arguments -Wno-strict-prototypes: YES
00:01:54.676 Compiler for C supports arguments -D_BSD_SOURCE: YES
00:01:54.676 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES
00:01:54.676 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES
00:01:54.676 Run-time dependency libmlx5 found: YES 1.24.44.0
00:01:54.676 Run-time dependency libibverbs found: YES 1.14.44.0
00:01:54.676 Library mtcr_ul found: NO
00:01:54.676 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_ESP" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "infiniband/verbs.h" has symbol "IBV_RX_HASH_IPSEC_SPI" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "infiniband/verbs.h" has symbol "IBV_ACCESS_RELAXED_ORDERING " with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQE_RES_FORMAT_CSUM_STRIDX" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_MASK_TUNNEL_OFFLOADS" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_MPW_ALLOWED" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CONTEXT_FLAGS_CQE_128B_COMP" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_CQ_INIT_ATTR_FLAGS_CQE_PAD" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_flow_action_packet_reformat" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "infiniband/verbs.h" has symbol "IBV_FLOW_SPEC_MPLS" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAGS_PCI_WRITE_END_PADDING" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "infiniband/verbs.h" has symbol "IBV_WQ_FLAG_RX_END_PADDING" with dependencies libmlx5, libibverbs: NO
00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_devx_port" with dependencies libmlx5, libibverbs: NO
00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_query_port" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_ib_port" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_create" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_COUNTERS_DEVX" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_FLOW_ACTION_DEFAULT_MISS" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_obj_query_async" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_qp_query" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_pp_alloc" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_devx_tir" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_devx_get_event" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_meter" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "MLX5_MMAP_GET_NC_PAGES_CMD" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_NIC_RX" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_DOMAIN_TYPE_FDB" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_push_vlan" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_alloc_var" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ENHANCED_MPSW" with dependencies libmlx5, libibverbs: NO
00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_SEND_EN" with dependencies libmlx5, libibverbs: NO
00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_WAIT" with dependencies libmlx5, libibverbs: NO
00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "MLX5_OPCODE_ACCESS_ASO" with dependencies libmlx5, libibverbs: NO
00:01:54.676 Header "linux/ethtool.h" has symbol "SUPPORTED_40000baseKR4_Full" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "linux/ethtool.h" has symbol "SUPPORTED_40000baseCR4_Full" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "linux/ethtool.h" has symbol "SUPPORTED_40000baseSR4_Full" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "linux/ethtool.h" has symbol "SUPPORTED_40000baseLR4_Full" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "linux/ethtool.h" has symbol "SUPPORTED_56000baseKR4_Full" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "linux/ethtool.h" has symbol "SUPPORTED_56000baseCR4_Full" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "linux/ethtool.h" has symbol "SUPPORTED_56000baseSR4_Full" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "linux/ethtool.h" has symbol "SUPPORTED_56000baseLR4_Full" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "linux/ethtool.h" has symbol "ETHTOOL_LINK_MODE_25000baseCR_Full_BIT" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "linux/ethtool.h" has symbol "ETHTOOL_LINK_MODE_50000baseCR2_Full_BIT" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "linux/ethtool.h" has symbol "ETHTOOL_LINK_MODE_100000baseKR4_Full_BIT" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "linux/if_link.h" has symbol "IFLA_NUM_VF" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "linux/if_link.h" has symbol "IFLA_EXT_MASK" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "linux/if_link.h" has symbol "IFLA_PHYS_SWITCH_ID" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "linux/if_link.h" has symbol "IFLA_PHYS_PORT_NAME" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "rdma/rdma_netlink.h" has symbol "RDMA_NL_NLDEV" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_GET" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_CMD_PORT_GET" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_INDEX" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_DEV_NAME" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_INDEX" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_PORT_STATE" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "rdma/rdma_netlink.h" has symbol "RDMA_NLDEV_ATTR_NDEV_INDEX" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_domain" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_flow_sampler" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_set_reclaim_device_memory" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_array" with dependencies libmlx5, libibverbs: YES
00:01:54.676 Header "linux/devlink.h" has symbol "DEVLINK_GENL_NAME" with
dependencies libmlx5, libibverbs: YES 00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_aso" with dependencies libmlx5, libibverbs: YES 00:01:54.676 Header "infiniband/verbs.h" has symbol "INFINIBAND_VERBS_H" with dependencies libmlx5, libibverbs: YES 00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "MLX5_WQE_UMR_CTRL_FLAG_INLINE" with dependencies libmlx5, libibverbs: YES 00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dump_dr_rule" with dependencies libmlx5, libibverbs: YES 00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "MLX5DV_DR_ACTION_FLAGS_ASO_CT_DIRECTION_INITIATOR" with dependencies libmlx5, libibverbs: YES 00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_domain_allow_duplicate_rules" with dependencies libmlx5, libibverbs: YES 00:01:54.676 Header "infiniband/verbs.h" has symbol "ibv_reg_mr_iova" with dependencies libmlx5, libibverbs: YES 00:01:54.676 Header "infiniband/verbs.h" has symbol "ibv_import_device" with dependencies libmlx5, libibverbs: YES 00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_dr_action_create_dest_root_table" with dependencies libmlx5, libibverbs: YES 00:01:54.676 Header "infiniband/mlx5dv.h" has symbol "mlx5dv_create_steering_anchor" with dependencies libmlx5, libibverbs: YES 00:01:54.676 Header "infiniband/verbs.h" has symbol "ibv_is_fork_initialized" with dependencies libmlx5, libibverbs: YES 00:01:54.676 Checking whether type "struct mlx5dv_sw_parsing_caps" has member "sw_parsing_offloads" with dependencies libmlx5, libibverbs: YES 00:01:54.676 Checking whether type "struct ibv_counter_set_init_attr" has member "counter_set_id" with dependencies libmlx5, libibverbs: NO 00:01:54.676 Checking whether type "struct ibv_counters_init_attr" has member "comp_mask" with dependencies libmlx5, libibverbs: YES 00:01:54.676 Checking whether type "struct mlx5dv_devx_uar" has member "mmap_off" with dependencies libmlx5, libibverbs: YES 00:01:54.676 Checking whether 
type "struct mlx5dv_flow_matcher_attr" has member "ft_type" with dependencies libmlx5, libibverbs: YES 00:01:54.676 Configuring mlx5_autoconf.h using configuration 00:01:54.676 Message: drivers/common/mlx5: Defining dependency "common_mlx5" 00:01:54.676 Run-time dependency libcrypto found: YES 3.0.9 00:01:54.676 Library IPSec_MB found: YES 00:01:54.676 Fetching value of define "IMB_VERSION_STR" : "1.5.0" 00:01:54.677 Message: drivers/common/qat: Defining dependency "common_qat" 00:01:54.677 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:54.677 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:54.677 Library IPSec_MB found: YES 00:01:54.677 Fetching value of define "IMB_VERSION_STR" : "1.5.0" (cached) 00:01:54.677 Message: drivers/crypto/ipsec_mb: Defining dependency "crypto_ipsec_mb" 00:01:54.677 Compiler for C supports arguments -std=c11: YES (cached) 00:01:54.677 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached) 00:01:54.677 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached) 00:01:54.677 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached) 00:01:54.677 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached) 00:01:54.677 Message: drivers/crypto/mlx5: Defining dependency "crypto_mlx5" 00:01:54.677 Run-time dependency libisal found: NO (tried pkgconfig) 00:01:54.677 Library libisal found: NO 00:01:54.677 Message: drivers/compress/isal: Defining dependency "compress_isal" 00:01:54.677 Compiler for C supports arguments -std=c11: YES (cached) 00:01:54.677 Compiler for C supports arguments -Wno-strict-prototypes: YES (cached) 00:01:54.677 Compiler for C supports arguments -D_BSD_SOURCE: YES (cached) 00:01:54.677 Compiler for C supports arguments -D_DEFAULT_SOURCE: YES (cached) 00:01:54.677 Compiler for C supports arguments -D_XOPEN_SOURCE=600: YES (cached) 00:01:54.677 Message: drivers/compress/mlx5: Defining dependency "compress_mlx5" 00:01:54.677 Message: 
Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:54.677 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:54.677 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:54.677 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:54.677 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:54.677 Program doxygen found: YES (/usr/bin/doxygen) 00:01:54.677 Configuring doxy-api-html.conf using configuration 00:01:54.677 Configuring doxy-api-man.conf using configuration 00:01:54.677 Program mandb found: YES (/usr/bin/mandb) 00:01:54.677 Program sphinx-build found: NO 00:01:54.677 Configuring rte_build_config.h using configuration 00:01:54.677 Message: 00:01:54.677 ================= 00:01:54.677 Applications Enabled 00:01:54.677 ================= 00:01:54.677 00:01:54.677 apps: 00:01:54.677 00:01:54.677 00:01:54.677 Message: 00:01:54.677 ================= 00:01:54.677 Libraries Enabled 00:01:54.677 ================= 00:01:54.677 00:01:54.677 libs: 00:01:54.677 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:54.677 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:54.677 cryptodev, dmadev, power, reorder, security, vhost, 00:01:54.677 00:01:54.677 Message: 00:01:54.677 =============== 00:01:54.677 Drivers Enabled 00:01:54.677 =============== 00:01:54.677 00:01:54.677 common: 00:01:54.677 mlx5, qat, 00:01:54.677 bus: 00:01:54.677 auxiliary, pci, vdev, 00:01:54.677 mempool: 00:01:54.677 ring, 00:01:54.677 dma: 00:01:54.677 00:01:54.677 net: 00:01:54.677 00:01:54.677 crypto: 00:01:54.677 ipsec_mb, mlx5, 00:01:54.677 compress: 00:01:54.677 isal, mlx5, 00:01:54.677 vdpa: 00:01:54.677 00:01:54.677 00:01:54.677 Message: 00:01:54.677 ================= 00:01:54.677 Content Skipped 00:01:54.677 ================= 00:01:54.677 00:01:54.677 apps: 00:01:54.677 dumpcap: explicitly disabled via build config 00:01:54.677 graph: explicitly 
disabled via build config 00:01:54.677 pdump: explicitly disabled via build config 00:01:54.677 proc-info: explicitly disabled via build config 00:01:54.677 test-acl: explicitly disabled via build config 00:01:54.677 test-bbdev: explicitly disabled via build config 00:01:54.677 test-cmdline: explicitly disabled via build config 00:01:54.677 test-compress-perf: explicitly disabled via build config 00:01:54.677 test-crypto-perf: explicitly disabled via build config 00:01:54.677 test-dma-perf: explicitly disabled via build config 00:01:54.677 test-eventdev: explicitly disabled via build config 00:01:54.677 test-fib: explicitly disabled via build config 00:01:54.677 test-flow-perf: explicitly disabled via build config 00:01:54.677 test-gpudev: explicitly disabled via build config 00:01:54.677 test-mldev: explicitly disabled via build config 00:01:54.677 test-pipeline: explicitly disabled via build config 00:01:54.677 test-pmd: explicitly disabled via build config 00:01:54.677 test-regex: explicitly disabled via build config 00:01:54.677 test-sad: explicitly disabled via build config 00:01:54.677 test-security-perf: explicitly disabled via build config 00:01:54.677 00:01:54.677 libs: 00:01:54.677 metrics: explicitly disabled via build config 00:01:54.677 acl: explicitly disabled via build config 00:01:54.677 bbdev: explicitly disabled via build config 00:01:54.677 bitratestats: explicitly disabled via build config 00:01:54.677 bpf: explicitly disabled via build config 00:01:54.677 cfgfile: explicitly disabled via build config 00:01:54.677 distributor: explicitly disabled via build config 00:01:54.677 efd: explicitly disabled via build config 00:01:54.677 eventdev: explicitly disabled via build config 00:01:54.677 dispatcher: explicitly disabled via build config 00:01:54.677 gpudev: explicitly disabled via build config 00:01:54.677 gro: explicitly disabled via build config 00:01:54.677 gso: explicitly disabled via build config 00:01:54.677 ip_frag: explicitly disabled 
via build config 00:01:54.677 jobstats: explicitly disabled via build config 00:01:54.677 latencystats: explicitly disabled via build config 00:01:54.677 lpm: explicitly disabled via build config 00:01:54.677 member: explicitly disabled via build config 00:01:54.677 pcapng: explicitly disabled via build config 00:01:54.677 rawdev: explicitly disabled via build config 00:01:54.677 regexdev: explicitly disabled via build config 00:01:54.677 mldev: explicitly disabled via build config 00:01:54.677 rib: explicitly disabled via build config 00:01:54.677 sched: explicitly disabled via build config 00:01:54.677 stack: explicitly disabled via build config 00:01:54.677 ipsec: explicitly disabled via build config 00:01:54.677 pdcp: explicitly disabled via build config 00:01:54.677 fib: explicitly disabled via build config 00:01:54.677 port: explicitly disabled via build config 00:01:54.677 pdump: explicitly disabled via build config 00:01:54.677 table: explicitly disabled via build config 00:01:54.677 pipeline: explicitly disabled via build config 00:01:54.677 graph: explicitly disabled via build config 00:01:54.677 node: explicitly disabled via build config 00:01:54.677 00:01:54.677 drivers: 00:01:54.677 common/cpt: not in enabled drivers build config 00:01:54.677 common/dpaax: not in enabled drivers build config 00:01:54.677 common/iavf: not in enabled drivers build config 00:01:54.677 common/idpf: not in enabled drivers build config 00:01:54.677 common/mvep: not in enabled drivers build config 00:01:54.677 common/octeontx: not in enabled drivers build config 00:01:54.677 bus/cdx: not in enabled drivers build config 00:01:54.677 bus/dpaa: not in enabled drivers build config 00:01:54.677 bus/fslmc: not in enabled drivers build config 00:01:54.677 bus/ifpga: not in enabled drivers build config 00:01:54.677 bus/platform: not in enabled drivers build config 00:01:54.677 bus/vmbus: not in enabled drivers build config 00:01:54.677 common/cnxk: not in enabled drivers build config 
00:01:54.677 common/nfp: not in enabled drivers build config 00:01:54.677 common/sfc_efx: not in enabled drivers build config 00:01:54.677 mempool/bucket: not in enabled drivers build config 00:01:54.677 mempool/cnxk: not in enabled drivers build config 00:01:54.677 mempool/dpaa: not in enabled drivers build config 00:01:54.677 mempool/dpaa2: not in enabled drivers build config 00:01:54.677 mempool/octeontx: not in enabled drivers build config 00:01:54.677 mempool/stack: not in enabled drivers build config 00:01:54.677 dma/cnxk: not in enabled drivers build config 00:01:54.677 dma/dpaa: not in enabled drivers build config 00:01:54.677 dma/dpaa2: not in enabled drivers build config 00:01:54.677 dma/hisilicon: not in enabled drivers build config 00:01:54.677 dma/idxd: not in enabled drivers build config 00:01:54.677 dma/ioat: not in enabled drivers build config 00:01:54.677 dma/skeleton: not in enabled drivers build config 00:01:54.677 net/af_packet: not in enabled drivers build config 00:01:54.677 net/af_xdp: not in enabled drivers build config 00:01:54.677 net/ark: not in enabled drivers build config 00:01:54.677 net/atlantic: not in enabled drivers build config 00:01:54.677 net/avp: not in enabled drivers build config 00:01:54.677 net/axgbe: not in enabled drivers build config 00:01:54.677 net/bnx2x: not in enabled drivers build config 00:01:54.677 net/bnxt: not in enabled drivers build config 00:01:54.677 net/bonding: not in enabled drivers build config 00:01:54.677 net/cnxk: not in enabled drivers build config 00:01:54.677 net/cpfl: not in enabled drivers build config 00:01:54.677 net/cxgbe: not in enabled drivers build config 00:01:54.677 net/dpaa: not in enabled drivers build config 00:01:54.677 net/dpaa2: not in enabled drivers build config 00:01:54.678 net/e1000: not in enabled drivers build config 00:01:54.678 net/ena: not in enabled drivers build config 00:01:54.678 net/enetc: not in enabled drivers build config 00:01:54.678 net/enetfec: not in enabled 
drivers build config 00:01:54.678 net/enic: not in enabled drivers build config 00:01:54.678 net/failsafe: not in enabled drivers build config 00:01:54.678 net/fm10k: not in enabled drivers build config 00:01:54.678 net/gve: not in enabled drivers build config 00:01:54.678 net/hinic: not in enabled drivers build config 00:01:54.678 net/hns3: not in enabled drivers build config 00:01:54.678 net/i40e: not in enabled drivers build config 00:01:54.678 net/iavf: not in enabled drivers build config 00:01:54.678 net/ice: not in enabled drivers build config 00:01:54.678 net/idpf: not in enabled drivers build config 00:01:54.678 net/igc: not in enabled drivers build config 00:01:54.678 net/ionic: not in enabled drivers build config 00:01:54.678 net/ipn3ke: not in enabled drivers build config 00:01:54.678 net/ixgbe: not in enabled drivers build config 00:01:54.678 net/mana: not in enabled drivers build config 00:01:54.678 net/memif: not in enabled drivers build config 00:01:54.678 net/mlx4: not in enabled drivers build config 00:01:54.678 net/mlx5: not in enabled drivers build config 00:01:54.678 net/mvneta: not in enabled drivers build config 00:01:54.678 net/mvpp2: not in enabled drivers build config 00:01:54.678 net/netvsc: not in enabled drivers build config 00:01:54.678 net/nfb: not in enabled drivers build config 00:01:54.678 net/nfp: not in enabled drivers build config 00:01:54.678 net/ngbe: not in enabled drivers build config 00:01:54.678 net/null: not in enabled drivers build config 00:01:54.678 net/octeontx: not in enabled drivers build config 00:01:54.678 net/octeon_ep: not in enabled drivers build config 00:01:54.678 net/pcap: not in enabled drivers build config 00:01:54.678 net/pfe: not in enabled drivers build config 00:01:54.678 net/qede: not in enabled drivers build config 00:01:54.678 net/ring: not in enabled drivers build config 00:01:54.678 net/sfc: not in enabled drivers build config 00:01:54.678 net/softnic: not in enabled drivers build config 
00:01:54.678 net/tap: not in enabled drivers build config 00:01:54.678 net/thunderx: not in enabled drivers build config 00:01:54.678 net/txgbe: not in enabled drivers build config 00:01:54.678 net/vdev_netvsc: not in enabled drivers build config 00:01:54.678 net/vhost: not in enabled drivers build config 00:01:54.678 net/virtio: not in enabled drivers build config 00:01:54.678 net/vmxnet3: not in enabled drivers build config 00:01:54.678 raw/*: missing internal dependency, "rawdev" 00:01:54.678 crypto/armv8: not in enabled drivers build config 00:01:54.678 crypto/bcmfs: not in enabled drivers build config 00:01:54.678 crypto/caam_jr: not in enabled drivers build config 00:01:54.678 crypto/ccp: not in enabled drivers build config 00:01:54.678 crypto/cnxk: not in enabled drivers build config 00:01:54.678 crypto/dpaa_sec: not in enabled drivers build config 00:01:54.678 crypto/dpaa2_sec: not in enabled drivers build config 00:01:54.678 crypto/mvsam: not in enabled drivers build config 00:01:54.678 crypto/nitrox: not in enabled drivers build config 00:01:54.678 crypto/null: not in enabled drivers build config 00:01:54.678 crypto/octeontx: not in enabled drivers build config 00:01:54.678 crypto/openssl: not in enabled drivers build config 00:01:54.678 crypto/scheduler: not in enabled drivers build config 00:01:54.678 crypto/uadk: not in enabled drivers build config 00:01:54.678 crypto/virtio: not in enabled drivers build config 00:01:54.678 compress/octeontx: not in enabled drivers build config 00:01:54.678 compress/zlib: not in enabled drivers build config 00:01:54.678 regex/*: missing internal dependency, "regexdev" 00:01:54.678 ml/*: missing internal dependency, "mldev" 00:01:54.678 vdpa/ifc: not in enabled drivers build config 00:01:54.678 vdpa/mlx5: not in enabled drivers build config 00:01:54.678 vdpa/nfp: not in enabled drivers build config 00:01:54.678 vdpa/sfc: not in enabled drivers build config 00:01:54.678 event/*: missing internal dependency, "eventdev" 
00:01:54.678 baseband/*: missing internal dependency, "bbdev" 00:01:54.678 gpu/*: missing internal dependency, "gpudev" 00:01:54.678 00:01:54.678 00:01:54.678 Build targets in project: 115 00:01:54.678 00:01:54.678 DPDK 23.11.0 00:01:54.678 00:01:54.678 User defined options 00:01:54.678 buildtype : debug 00:01:54.678 default_library : shared 00:01:54.678 libdir : lib 00:01:54.678 prefix : /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:01:54.678 c_args : -I/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib -DNO_COMPAT_IMB_API_053 -I/var/jenkins/workspace/crypto-phy-autotest/spdk/isa-l -fPIC -Werror -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds 00:01:54.678 c_link_args : -L/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib -L/var/jenkins/workspace/crypto-phy-autotest/spdk/isa-l/.libs -lisal 00:01:54.678 cpu_instruction_set: native 00:01:54.678 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:54.678 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,pcapng,bbdev 00:01:54.678 enable_docs : false 00:01:54.678 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,crypto/qat,compress/qat,common/qat,common/mlx5,bus/auxiliary,crypto,crypto/aesni_mb,crypto/mlx5,crypto/ipsec_mb,compress,compress/isal,compress/mlx5 00:01:54.678 enable_kmods : false 00:01:54.678 tests : false 00:01:54.678 00:01:54.678 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:54.938 ninja: Entering directory `/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp' 00:01:55.202 [1/370] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:55.202 [2/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:55.202 [3/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:55.202 [4/370] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:55.202 [5/370] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:55.202 [6/370] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:55.202 [7/370] Linking static target lib/librte_kvargs.a 00:01:55.202 [8/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:55.202 [9/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:55.202 [10/370] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:55.202 [11/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:55.202 [12/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:55.202 [13/370] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:55.202 [14/370] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:55.202 [15/370] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:55.202 [16/370] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:55.202 [17/370] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:55.202 [18/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:55.202 [19/370] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:55.202 [20/370] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:55.202 [21/370] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:55.202 [22/370] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:55.202 [23/370] Linking static target lib/librte_log.a 00:01:55.202 [24/370] Compiling C object 
lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:55.202 [25/370] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:55.462 [26/370] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.462 [27/370] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:55.723 [28/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:55.723 [29/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:55.723 [30/370] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:55.723 [31/370] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:55.723 [32/370] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:55.723 [33/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:55.723 [34/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:55.723 [35/370] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:55.723 [36/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:55.723 [37/370] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:55.723 [38/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:55.723 [39/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:55.723 [40/370] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:55.723 [41/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:55.723 [42/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:55.723 [43/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:55.723 [44/370] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:55.723 [45/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:55.723 [46/370] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:55.723 [47/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:55.723 [48/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:55.723 [49/370] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:55.723 [50/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:55.723 [51/370] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:55.723 [52/370] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:55.723 [53/370] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:55.723 [54/370] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:55.723 [55/370] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:55.723 [56/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:55.723 [57/370] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:55.723 [58/370] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:55.723 [59/370] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:55.723 [60/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:55.723 [61/370] Linking static target lib/librte_ring.a 00:01:55.723 [62/370] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:55.723 [63/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:55.723 [64/370] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:55.723 [65/370] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:55.723 [66/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:55.723 [67/370] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:55.723 [68/370] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:55.723 [69/370] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:55.723 [70/370] Linking static target lib/librte_pci.a 00:01:55.723 [71/370] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:55.723 [72/370] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:55.723 [73/370] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:55.723 [74/370] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:55.723 [75/370] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:55.723 [76/370] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:55.723 [77/370] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:55.723 [78/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:55.723 [79/370] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:55.723 [80/370] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:55.723 [81/370] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:55.723 [82/370] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:55.723 [83/370] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:55.723 [84/370] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:55.723 [85/370] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:55.723 [86/370] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:55.723 [87/370] Linking static target lib/librte_telemetry.a 00:01:55.723 [88/370] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:55.723 [89/370] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:55.723 [90/370] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:55.723 [91/370] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:55.723 [92/370] 
Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:55.723 [93/370] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:55.723 [94/370] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:55.723 [95/370] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:55.723 [96/370] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:55.723 [97/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:55.723 [98/370] Linking static target lib/librte_meter.a 00:01:55.723 [99/370] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:55.723 [100/370] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:55.985 [101/370] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:55.985 [102/370] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:55.985 [103/370] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:55.985 [104/370] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:55.985 [105/370] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:55.985 [106/370] Linking static target lib/librte_rcu.a 00:01:55.985 [107/370] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:55.985 [108/370] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:55.985 [109/370] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:55.985 [110/370] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_params.c.o 00:01:55.985 [111/370] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:55.985 [112/370] Linking static target lib/librte_net.a 00:01:55.985 [113/370] Linking static target lib/librte_mempool.a 00:01:55.985 [114/370] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:55.985 [115/370] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 
00:01:55.985 [116/370] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:55.985 [117/370] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:55.985 [118/370] Linking static target lib/librte_eal.a 00:01:55.985 [119/370] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:55.985 [120/370] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:55.985 [121/370] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:55.985 [122/370] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.985 [123/370] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.985 [124/370] Linking target lib/librte_log.so.24.0 00:01:56.247 [125/370] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:56.247 [126/370] Linking static target lib/librte_mbuf.a 00:01:56.247 [127/370] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:56.247 [128/370] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.247 [129/370] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.247 [130/370] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:56.247 [131/370] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:56.247 [132/370] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:56.247 [133/370] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:56.247 [134/370] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:56.247 [135/370] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:56.247 [136/370] Linking static target lib/librte_cmdline.a 00:01:56.247 [137/370] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:56.247 
[138/370] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:56.247 [139/370] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:56.247 [140/370] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.247 [141/370] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:56.247 [142/370] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:56.247 [143/370] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:56.247 [144/370] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.247 [145/370] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:56.247 [146/370] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:56.247 [147/370] Linking static target lib/librte_timer.a 00:01:56.247 [148/370] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:56.247 [149/370] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:56.247 [150/370] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_auxiliary_common.c.o 00:01:56.247 [151/370] Linking target lib/librte_kvargs.so.24.0 00:01:56.247 [152/370] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:56.247 [153/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_logs.c.o 00:01:56.247 [154/370] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:56.247 [155/370] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:56.247 [156/370] Compiling C object drivers/libtmp_rte_bus_auxiliary.a.p/bus_auxiliary_linux_auxiliary.c.o 00:01:56.247 [157/370] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:56.508 [158/370] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:56.508 [159/370] Compiling C object 
lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:56.508 [160/370] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:56.508 [161/370] Linking static target drivers/libtmp_rte_bus_auxiliary.a 00:01:56.508 [162/370] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:56.508 [163/370] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:56.508 [164/370] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:56.508 [165/370] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:56.508 [166/370] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:56.508 [167/370] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:56.508 [168/370] Linking static target lib/librte_dmadev.a 00:01:56.508 [169/370] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:56.508 [170/370] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.508 [171/370] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:56.508 [172/370] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:56.508 [173/370] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:56.508 [174/370] Linking static target lib/librte_compressdev.a 00:01:56.508 [175/370] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:56.508 [176/370] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:56.508 [177/370] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:56.508 [178/370] Linking static target lib/librte_reorder.a 00:01:56.508 [179/370] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:56.508 [180/370] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:56.508 [181/370] Compiling C object 
lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:56.508 [182/370] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:56.508 [183/370] Linking target lib/librte_telemetry.so.24.0 00:01:56.508 [184/370] Linking static target lib/librte_power.a 00:01:56.508 [185/370] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_glue.c.o 00:01:56.508 [186/370] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:56.508 [187/370] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:56.508 [188/370] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:56.508 [189/370] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:56.508 [190/370] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:56.508 [191/370] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:56.508 [192/370] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:56.508 [193/370] Linking static target lib/librte_security.a 00:01:56.769 [194/370] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mp.c.o 00:01:56.769 [195/370] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_pci.c.o 00:01:56.769 [196/370] Generating drivers/rte_bus_auxiliary.pmd.c with a custom command 00:01:56.769 [197/370] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_malloc.c.o 00:01:56.769 [198/370] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:56.769 [199/370] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common.c.o 00:01:56.769 [200/370] Compiling C object drivers/librte_bus_auxiliary.a.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o 00:01:56.770 [201/370] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_devx.c.o 00:01:56.770 [202/370] Compiling C object 
drivers/librte_bus_auxiliary.so.24.0.p/meson-generated_.._rte_bus_auxiliary.pmd.c.o 00:01:56.770 [203/370] Linking static target drivers/librte_bus_auxiliary.a 00:01:56.770 [204/370] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_utils.c.o 00:01:56.770 [205/370] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:56.770 [206/370] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:56.770 [207/370] Linking static target lib/librte_hash.a 00:01:56.770 [208/370] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:56.770 [209/370] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:56.770 [210/370] Linking static target drivers/librte_bus_vdev.a 00:01:56.770 [211/370] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:56.770 [212/370] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:56.770 [213/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_common.c.o 00:01:56.770 [214/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_pf2vf.c.o 00:01:56.770 [215/370] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.770 [216/370] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_auxiliary.c.o 00:01:56.770 [217/370] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:56.770 [218/370] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_verbs.c.o 00:01:56.770 [219/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen2.c.o 00:01:56.770 [220/370] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:56.770 [221/370] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:56.770 [222/370] Compiling C object 
drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen3.c.o 00:01:57.029 [223/370] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.029 [224/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen1.c.o 00:01:57.029 [225/370] Linking static target drivers/librte_bus_pci.a 00:01:57.029 [226/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen1.c.o 00:01:57.029 [227/370] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_nl.c.o 00:01:57.029 [228/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen2.c.o 00:01:57.029 [229/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_device.c.o 00:01:57.029 [230/370] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.029 [231/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp_pmd.c.o 00:01:57.029 [232/370] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_common_mr.c.o 00:01:57.029 [233/370] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.029 [234/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_dev_qat_dev_gen4.c.o 00:01:57.029 [235/370] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.029 [236/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen3.c.o 00:01:57.029 [237/370] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:57.029 [238/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/compress_qat_dev_qat_comp_pmd_gen4.c.o 00:01:57.029 [239/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym.c.o 00:01:57.029 [240/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen2.c.o 00:01:57.029 
[241/370] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_linux_mlx5_common_os.c.o 00:01:57.029 [242/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_asym_pmd_gen1.c.o 00:01:57.029 [243/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_crypto.c.o 00:01:57.029 [244/370] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:57.029 [245/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/common_qat_qat_qp.c.o 00:01:57.029 [246/370] Linking static target lib/librte_cryptodev.a 00:01:57.029 [247/370] Generating drivers/rte_bus_auxiliary.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.029 [248/370] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_ops.c.o 00:01:57.029 [249/370] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_ipsec_mb_private.c.o 00:01:57.029 [250/370] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.029 [251/370] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.029 [252/370] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:57.030 [253/370] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:57.030 [254/370] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.288 [255/370] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto.c.o 00:01:57.288 [256/370] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_dek.c.o 00:01:57.288 [257/370] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_xts.c.o 00:01:57.288 [258/370] Compiling C object drivers/libtmp_rte_crypto_mlx5.a.p/crypto_mlx5_mlx5_crypto_gcm.c.o 00:01:57.288 [259/370] Compiling C object 
drivers/libtmp_rte_common_qat.a.p/compress_qat_qat_comp.c.o 00:01:57.288 [260/370] Linking static target drivers/libtmp_rte_crypto_mlx5.a 00:01:57.288 [261/370] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd.c.o 00:01:57.288 [262/370] Compiling C object drivers/libtmp_rte_compress_isal.a.p/compress_isal_isal_compress_pmd_ops.c.o 00:01:57.288 [263/370] Linking static target drivers/libtmp_rte_compress_isal.a 00:01:57.288 [264/370] Compiling C object drivers/libtmp_rte_common_mlx5.a.p/common_mlx5_mlx5_devx_cmds.c.o 00:01:57.288 [265/370] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_gcm.c.o 00:01:57.288 [266/370] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:57.288 [267/370] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_chacha_poly.c.o 00:01:57.288 [268/370] Linking static target drivers/libtmp_rte_common_mlx5.a 00:01:57.288 [269/370] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.288 [270/370] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_kasumi.c.o 00:01:57.288 [271/370] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:57.288 [272/370] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:57.288 [273/370] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_zuc.c.o 00:01:57.288 [274/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen4.c.o 00:01:57.288 [275/370] Linking static target drivers/librte_mempool_ring.a 00:01:57.288 [276/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_sym_session.c.o 00:01:57.288 [277/370] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_snow3g.c.o 00:01:57.288 [278/370] Compiling C object 
drivers/libtmp_rte_compress_mlx5.a.p/compress_mlx5_mlx5_compress.c.o 00:01:57.288 [279/370] Linking static target drivers/libtmp_rte_compress_mlx5.a 00:01:57.288 [280/370] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.545 [281/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_crypto_pmd_gen3.c.o 00:01:57.545 [282/370] Generating drivers/rte_crypto_mlx5.pmd.c with a custom command 00:01:57.546 [283/370] Compiling C object drivers/libtmp_rte_crypto_ipsec_mb.a.p/crypto_ipsec_mb_pmd_aesni_mb.c.o 00:01:57.546 [284/370] Generating drivers/rte_compress_isal.pmd.c with a custom command 00:01:57.546 [285/370] Linking static target drivers/libtmp_rte_crypto_ipsec_mb.a 00:01:57.546 [286/370] Compiling C object drivers/librte_crypto_mlx5.so.24.0.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o 00:01:57.546 [287/370] Compiling C object drivers/librte_compress_isal.so.24.0.p/meson-generated_.._rte_compress_isal.pmd.c.o 00:01:57.546 [288/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_dev_qat_sym_pmd_gen1.c.o 00:01:57.546 [289/370] Compiling C object drivers/librte_crypto_mlx5.a.p/meson-generated_.._rte_crypto_mlx5.pmd.c.o 00:01:57.546 [290/370] Compiling C object drivers/librte_compress_isal.a.p/meson-generated_.._rte_compress_isal.pmd.c.o 00:01:57.546 [291/370] Linking static target drivers/librte_crypto_mlx5.a 00:01:57.546 [292/370] Linking static target drivers/librte_compress_isal.a 00:01:57.546 [293/370] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.546 [294/370] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:57.546 [295/370] Generating drivers/rte_common_mlx5.pmd.c with a custom command 00:01:57.546 [296/370] Linking static target lib/librte_ethdev.a 00:01:57.546 [297/370] Generating drivers/rte_compress_mlx5.pmd.c with a custom command 00:01:57.546 [298/370] Compiling C object 
drivers/librte_common_mlx5.so.24.0.p/meson-generated_.._rte_common_mlx5.pmd.c.o 00:01:57.546 [299/370] Compiling C object drivers/librte_common_mlx5.a.p/meson-generated_.._rte_common_mlx5.pmd.c.o 00:01:57.546 [300/370] Linking static target drivers/librte_common_mlx5.a 00:01:57.546 [301/370] Compiling C object drivers/librte_compress_mlx5.so.24.0.p/meson-generated_.._rte_compress_mlx5.pmd.c.o 00:01:57.546 [302/370] Compiling C object drivers/librte_compress_mlx5.a.p/meson-generated_.._rte_compress_mlx5.pmd.c.o 00:01:57.546 [303/370] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.546 [304/370] Linking static target drivers/librte_compress_mlx5.a 00:01:57.546 [305/370] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:57.803 [306/370] Generating drivers/rte_crypto_ipsec_mb.pmd.c with a custom command 00:01:57.803 [307/370] Compiling C object drivers/librte_crypto_ipsec_mb.so.24.0.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o 00:01:57.803 [308/370] Compiling C object drivers/librte_crypto_ipsec_mb.a.p/meson-generated_.._rte_crypto_ipsec_mb.pmd.c.o 00:01:57.803 [309/370] Linking static target drivers/librte_crypto_ipsec_mb.a 00:01:58.060 [310/370] Compiling C object drivers/libtmp_rte_common_qat.a.p/crypto_qat_qat_asym.c.o 00:01:58.060 [311/370] Linking static target drivers/libtmp_rte_common_qat.a 00:01:58.318 [312/370] Generating drivers/rte_common_qat.pmd.c with a custom command 00:01:58.318 [313/370] Compiling C object drivers/librte_common_qat.a.p/meson-generated_.._rte_common_qat.pmd.c.o 00:01:58.318 [314/370] Compiling C object drivers/librte_common_qat.so.24.0.p/meson-generated_.._rte_common_qat.pmd.c.o 00:01:58.318 [315/370] Linking static target drivers/librte_common_qat.a 00:01:58.576 [316/370] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:58.576 [317/370] Linking static target lib/librte_vhost.a 00:01:59.142 [318/370] Generating lib/cryptodev.sym_chk with 
a custom command (wrapped by meson to capture output) 00:02:01.044 [319/370] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.567 [320/370] Generating drivers/rte_common_mlx5.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.842 [321/370] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.741 [322/370] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.741 [323/370] Linking target lib/librte_eal.so.24.0 00:02:08.741 [324/370] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:08.999 [325/370] Linking target lib/librte_pci.so.24.0 00:02:08.999 [326/370] Linking target drivers/librte_bus_vdev.so.24.0 00:02:08.999 [327/370] Linking target lib/librte_ring.so.24.0 00:02:08.999 [328/370] Linking target lib/librte_meter.so.24.0 00:02:08.999 [329/370] Linking target lib/librte_dmadev.so.24.0 00:02:08.999 [330/370] Linking target lib/librte_timer.so.24.0 00:02:08.999 [331/370] Linking target drivers/librte_bus_auxiliary.so.24.0 00:02:08.999 [332/370] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:08.999 [333/370] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:08.999 [334/370] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:08.999 [335/370] Generating symbol file drivers/librte_bus_auxiliary.so.24.0.p/librte_bus_auxiliary.so.24.0.symbols 00:02:08.999 [336/370] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:08.999 [337/370] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:08.999 [338/370] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:08.999 [339/370] Linking target drivers/librte_bus_pci.so.24.0 00:02:08.999 [340/370] Linking target lib/librte_rcu.so.24.0 
00:02:08.999 [341/370] Linking target lib/librte_mempool.so.24.0 00:02:09.257 [342/370] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:09.257 [343/370] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:09.257 [344/370] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:09.257 [345/370] Linking target drivers/librte_mempool_ring.so.24.0 00:02:09.257 [346/370] Linking target lib/librte_mbuf.so.24.0 00:02:09.257 [347/370] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:09.515 [348/370] Linking target lib/librte_cryptodev.so.24.0 00:02:09.515 [349/370] Linking target lib/librte_reorder.so.24.0 00:02:09.515 [350/370] Linking target lib/librte_net.so.24.0 00:02:09.515 [351/370] Linking target lib/librte_compressdev.so.24.0 00:02:09.515 [352/370] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:09.515 [353/370] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:09.515 [354/370] Generating symbol file lib/librte_compressdev.so.24.0.p/librte_compressdev.so.24.0.symbols 00:02:09.515 [355/370] Linking target lib/librte_security.so.24.0 00:02:09.515 [356/370] Linking target lib/librte_hash.so.24.0 00:02:09.515 [357/370] Linking target lib/librte_cmdline.so.24.0 00:02:09.515 [358/370] Linking target lib/librte_ethdev.so.24.0 00:02:09.515 [359/370] Linking target drivers/librte_compress_isal.so.24.0 00:02:09.802 [360/370] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:09.802 [361/370] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:09.802 [362/370] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:09.802 [363/370] Linking target drivers/librte_common_mlx5.so.24.0 00:02:09.802 [364/370] Linking target lib/librte_power.so.24.0 00:02:09.802 
[365/370] Linking target lib/librte_vhost.so.24.0 00:02:10.122 [366/370] Generating symbol file drivers/librte_common_mlx5.so.24.0.p/librte_common_mlx5.so.24.0.symbols 00:02:10.122 [367/370] Linking target drivers/librte_crypto_ipsec_mb.so.24.0 00:02:10.122 [368/370] Linking target drivers/librte_compress_mlx5.so.24.0 00:02:10.122 [369/370] Linking target drivers/librte_crypto_mlx5.so.24.0 00:02:10.122 [370/370] Linking target drivers/librte_common_qat.so.24.0 00:02:10.122 INFO: autodetecting backend as ninja 00:02:10.122 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build-tmp -j 72 00:02:11.056 CC lib/ut_mock/mock.o 00:02:11.056 CC lib/log/log.o 00:02:11.056 CC lib/log/log_deprecated.o 00:02:11.056 CC lib/log/log_flags.o 00:02:11.056 CC lib/ut/ut.o 00:02:11.056 LIB libspdk_ut_mock.a 00:02:11.056 SO libspdk_ut_mock.so.5.0 00:02:11.056 LIB libspdk_log.a 00:02:11.056 LIB libspdk_ut.a 00:02:11.056 SYMLINK libspdk_ut_mock.so 00:02:11.056 SO libspdk_log.so.6.1 00:02:11.056 SO libspdk_ut.so.1.0 00:02:11.313 SYMLINK libspdk_ut.so 00:02:11.313 SYMLINK libspdk_log.so 00:02:11.570 CC lib/ioat/ioat.o 00:02:11.570 CC lib/dma/dma.o 00:02:11.570 CC lib/util/base64.o 00:02:11.570 CC lib/util/cpuset.o 00:02:11.570 CC lib/util/bit_array.o 00:02:11.570 CC lib/util/crc16.o 00:02:11.570 CC lib/util/crc32c.o 00:02:11.570 CC lib/util/crc32.o 00:02:11.570 CC lib/util/crc32_ieee.o 00:02:11.570 CC lib/util/crc64.o 00:02:11.570 CC lib/util/dif.o 00:02:11.570 CC lib/util/hexlify.o 00:02:11.570 CC lib/util/file.o 00:02:11.570 CC lib/util/fd.o 00:02:11.570 CC lib/util/iov.o 00:02:11.570 CC lib/util/math.o 00:02:11.570 CC lib/util/pipe.o 00:02:11.570 CC lib/util/strerror_tls.o 00:02:11.570 CC lib/util/string.o 00:02:11.570 CC lib/util/xor.o 00:02:11.570 CC lib/util/uuid.o 00:02:11.570 CC lib/util/fd_group.o 00:02:11.570 CXX lib/trace_parser/trace.o 00:02:11.570 CC lib/util/zipf.o 00:02:11.570 CC 
lib/vfio_user/host/vfio_user_pci.o 00:02:11.570 CC lib/vfio_user/host/vfio_user.o 00:02:11.570 LIB libspdk_dma.a 00:02:11.570 SO libspdk_dma.so.3.0 00:02:11.828 LIB libspdk_ioat.a 00:02:11.828 SYMLINK libspdk_dma.so 00:02:11.828 SO libspdk_ioat.so.6.0 00:02:11.828 LIB libspdk_vfio_user.a 00:02:11.828 SYMLINK libspdk_ioat.so 00:02:11.828 SO libspdk_vfio_user.so.4.0 00:02:11.828 LIB libspdk_util.a 00:02:11.828 SYMLINK libspdk_vfio_user.so 00:02:12.086 SO libspdk_util.so.8.0 00:02:12.086 SYMLINK libspdk_util.so 00:02:12.086 LIB libspdk_trace_parser.a 00:02:12.344 SO libspdk_trace_parser.so.4.0 00:02:12.344 CC lib/json/json_parse.o 00:02:12.344 CC lib/json/json_util.o 00:02:12.344 CC lib/json/json_write.o 00:02:12.344 CC lib/rdma/rdma_verbs.o 00:02:12.344 CC lib/rdma/common.o 00:02:12.344 CC lib/conf/conf.o 00:02:12.344 CC lib/reduce/reduce.o 00:02:12.344 SYMLINK libspdk_trace_parser.so 00:02:12.344 CC lib/env_dpdk/env.o 00:02:12.344 CC lib/env_dpdk/memory.o 00:02:12.344 CC lib/env_dpdk/init.o 00:02:12.344 CC lib/env_dpdk/pci.o 00:02:12.344 CC lib/env_dpdk/threads.o 00:02:12.344 CC lib/vmd/led.o 00:02:12.344 CC lib/env_dpdk/pci_ioat.o 00:02:12.344 CC lib/vmd/vmd.o 00:02:12.344 CC lib/env_dpdk/pci_vmd.o 00:02:12.344 CC lib/env_dpdk/pci_virtio.o 00:02:12.344 CC lib/env_dpdk/pci_idxd.o 00:02:12.344 CC lib/env_dpdk/pci_event.o 00:02:12.344 CC lib/env_dpdk/sigbus_handler.o 00:02:12.344 CC lib/env_dpdk/pci_dpdk.o 00:02:12.344 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:12.344 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:12.344 CC lib/idxd/idxd.o 00:02:12.344 CC lib/idxd/idxd_user.o 00:02:12.344 CC lib/idxd/idxd_kernel.o 00:02:12.601 LIB libspdk_conf.a 00:02:12.601 SO libspdk_conf.so.5.0 00:02:12.601 LIB libspdk_json.a 00:02:12.601 LIB libspdk_rdma.a 00:02:12.601 SYMLINK libspdk_conf.so 00:02:12.601 SO libspdk_json.so.5.1 00:02:12.601 SO libspdk_rdma.so.5.0 00:02:12.601 SYMLINK libspdk_json.so 00:02:12.601 SYMLINK libspdk_rdma.so 00:02:12.859 LIB libspdk_idxd.a 00:02:12.859 LIB 
libspdk_reduce.a 00:02:12.859 SO libspdk_idxd.so.11.0 00:02:12.859 LIB libspdk_vmd.a 00:02:12.859 SO libspdk_reduce.so.5.0 00:02:12.859 SO libspdk_vmd.so.5.0 00:02:12.859 SYMLINK libspdk_idxd.so 00:02:12.859 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:12.859 CC lib/jsonrpc/jsonrpc_client.o 00:02:12.859 CC lib/jsonrpc/jsonrpc_server.o 00:02:12.859 SYMLINK libspdk_reduce.so 00:02:12.859 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:12.859 SYMLINK libspdk_vmd.so 00:02:13.117 LIB libspdk_jsonrpc.a 00:02:13.118 SO libspdk_jsonrpc.so.5.1 00:02:13.376 SYMLINK libspdk_jsonrpc.so 00:02:13.376 LIB libspdk_env_dpdk.a 00:02:13.376 SO libspdk_env_dpdk.so.13.0 00:02:13.376 CC lib/rpc/rpc.o 00:02:13.633 SYMLINK libspdk_env_dpdk.so 00:02:13.633 LIB libspdk_rpc.a 00:02:13.633 SO libspdk_rpc.so.5.0 00:02:13.633 SYMLINK libspdk_rpc.so 00:02:13.889 CC lib/trace/trace.o 00:02:13.889 CC lib/trace/trace_flags.o 00:02:13.889 CC lib/trace/trace_rpc.o 00:02:13.889 CC lib/sock/sock.o 00:02:13.889 CC lib/sock/sock_rpc.o 00:02:13.890 CC lib/notify/notify.o 00:02:13.890 CC lib/notify/notify_rpc.o 00:02:14.146 LIB libspdk_trace.a 00:02:14.146 LIB libspdk_notify.a 00:02:14.146 SO libspdk_trace.so.9.0 00:02:14.146 SO libspdk_notify.so.5.0 00:02:14.146 SYMLINK libspdk_trace.so 00:02:14.146 LIB libspdk_sock.a 00:02:14.403 SYMLINK libspdk_notify.so 00:02:14.403 SO libspdk_sock.so.8.0 00:02:14.403 SYMLINK libspdk_sock.so 00:02:14.403 CC lib/thread/thread.o 00:02:14.403 CC lib/thread/iobuf.o 00:02:14.660 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:14.660 CC lib/nvme/nvme_ctrlr.o 00:02:14.660 CC lib/nvme/nvme_fabric.o 00:02:14.660 CC lib/nvme/nvme_ns_cmd.o 00:02:14.660 CC lib/nvme/nvme_ns.o 00:02:14.660 CC lib/nvme/nvme_pcie_common.o 00:02:14.660 CC lib/nvme/nvme_pcie.o 00:02:14.660 CC lib/nvme/nvme_qpair.o 00:02:14.660 CC lib/nvme/nvme.o 00:02:14.660 CC lib/nvme/nvme_discovery.o 00:02:14.660 CC lib/nvme/nvme_quirks.o 00:02:14.660 CC lib/nvme/nvme_transport.o 00:02:14.660 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:14.660 
CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:14.660 CC lib/nvme/nvme_tcp.o 00:02:14.660 CC lib/nvme/nvme_opal.o 00:02:14.660 CC lib/nvme/nvme_io_msg.o 00:02:14.660 CC lib/nvme/nvme_poll_group.o 00:02:14.660 CC lib/nvme/nvme_zns.o 00:02:14.660 CC lib/nvme/nvme_cuse.o 00:02:14.660 CC lib/nvme/nvme_vfio_user.o 00:02:14.660 CC lib/nvme/nvme_rdma.o 00:02:15.591 LIB libspdk_thread.a 00:02:15.591 SO libspdk_thread.so.9.0 00:02:15.848 SYMLINK libspdk_thread.so 00:02:15.848 CC lib/blob/blobstore.o 00:02:15.848 CC lib/blob/request.o 00:02:15.848 CC lib/blob/zeroes.o 00:02:15.848 CC lib/blob/blob_bs_dev.o 00:02:16.106 CC lib/accel/accel_rpc.o 00:02:16.106 CC lib/accel/accel.o 00:02:16.106 CC lib/accel/accel_sw.o 00:02:16.106 CC lib/virtio/virtio_vhost_user.o 00:02:16.106 CC lib/virtio/virtio.o 00:02:16.106 CC lib/virtio/virtio_vfio_user.o 00:02:16.106 CC lib/init/json_config.o 00:02:16.106 CC lib/init/subsystem.o 00:02:16.106 CC lib/virtio/virtio_pci.o 00:02:16.106 CC lib/init/subsystem_rpc.o 00:02:16.106 CC lib/init/rpc.o 00:02:16.106 LIB libspdk_nvme.a 00:02:16.106 LIB libspdk_init.a 00:02:16.106 SO libspdk_init.so.4.0 00:02:16.362 SO libspdk_nvme.so.12.0 00:02:16.362 LIB libspdk_virtio.a 00:02:16.362 SYMLINK libspdk_init.so 00:02:16.362 SO libspdk_virtio.so.6.0 00:02:16.362 SYMLINK libspdk_virtio.so 00:02:16.619 SYMLINK libspdk_nvme.so 00:02:16.619 CC lib/event/app.o 00:02:16.619 CC lib/event/app_rpc.o 00:02:16.619 CC lib/event/reactor.o 00:02:16.619 CC lib/event/log_rpc.o 00:02:16.619 CC lib/event/scheduler_static.o 00:02:16.619 LIB libspdk_accel.a 00:02:16.876 SO libspdk_accel.so.14.0 00:02:16.876 LIB libspdk_event.a 00:02:16.876 SYMLINK libspdk_accel.so 00:02:16.876 SO libspdk_event.so.12.0 00:02:16.876 SYMLINK libspdk_event.so 00:02:17.134 CC lib/bdev/bdev.o 00:02:17.134 CC lib/bdev/bdev_rpc.o 00:02:17.134 CC lib/bdev/part.o 00:02:17.134 CC lib/bdev/bdev_zone.o 00:02:17.134 CC lib/bdev/scsi_nvme.o 00:02:18.064 LIB libspdk_blob.a 00:02:18.064 SO libspdk_blob.so.10.1 
00:02:18.064 SYMLINK libspdk_blob.so 00:02:18.320 CC lib/blobfs/blobfs.o 00:02:18.320 CC lib/blobfs/tree.o 00:02:18.320 CC lib/lvol/lvol.o 00:02:18.884 LIB libspdk_bdev.a 00:02:18.884 LIB libspdk_blobfs.a 00:02:18.884 SO libspdk_bdev.so.14.0 00:02:18.884 SO libspdk_blobfs.so.9.0 00:02:18.884 LIB libspdk_lvol.a 00:02:18.884 SO libspdk_lvol.so.9.1 00:02:18.884 SYMLINK libspdk_blobfs.so 00:02:18.884 SYMLINK libspdk_bdev.so 00:02:18.884 SYMLINK libspdk_lvol.so 00:02:19.143 CC lib/ublk/ublk.o 00:02:19.143 CC lib/ublk/ublk_rpc.o 00:02:19.143 CC lib/nbd/nbd.o 00:02:19.143 CC lib/nbd/nbd_rpc.o 00:02:19.143 CC lib/scsi/dev.o 00:02:19.143 CC lib/scsi/lun.o 00:02:19.143 CC lib/scsi/port.o 00:02:19.143 CC lib/scsi/scsi.o 00:02:19.143 CC lib/scsi/scsi_bdev.o 00:02:19.143 CC lib/scsi/scsi_pr.o 00:02:19.143 CC lib/nvmf/ctrlr.o 00:02:19.143 CC lib/nvmf/ctrlr_discovery.o 00:02:19.143 CC lib/scsi/scsi_rpc.o 00:02:19.143 CC lib/nvmf/ctrlr_bdev.o 00:02:19.143 CC lib/scsi/task.o 00:02:19.143 CC lib/nvmf/subsystem.o 00:02:19.143 CC lib/nvmf/nvmf.o 00:02:19.143 CC lib/nvmf/nvmf_rpc.o 00:02:19.143 CC lib/nvmf/transport.o 00:02:19.143 CC lib/nvmf/tcp.o 00:02:19.143 CC lib/nvmf/rdma.o 00:02:19.143 CC lib/ftl/ftl_init.o 00:02:19.143 CC lib/ftl/ftl_core.o 00:02:19.143 CC lib/ftl/ftl_layout.o 00:02:19.144 CC lib/ftl/ftl_debug.o 00:02:19.144 CC lib/ftl/ftl_io.o 00:02:19.144 CC lib/ftl/ftl_sb.o 00:02:19.144 CC lib/ftl/ftl_l2p.o 00:02:19.144 CC lib/ftl/ftl_l2p_flat.o 00:02:19.144 CC lib/ftl/ftl_nv_cache.o 00:02:19.144 CC lib/ftl/ftl_band.o 00:02:19.144 CC lib/ftl/ftl_band_ops.o 00:02:19.144 CC lib/ftl/ftl_writer.o 00:02:19.144 CC lib/ftl/ftl_rq.o 00:02:19.144 CC lib/ftl/ftl_reloc.o 00:02:19.144 CC lib/ftl/ftl_l2p_cache.o 00:02:19.144 CC lib/ftl/mngt/ftl_mngt.o 00:02:19.144 CC lib/ftl/ftl_p2l.o 00:02:19.144 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:19.144 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:19.144 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:19.144 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:19.144 CC 
lib/ftl/mngt/ftl_mngt_md.o 00:02:19.144 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:19.144 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:19.144 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:19.144 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:19.144 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:19.144 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:19.144 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:19.144 CC lib/ftl/utils/ftl_md.o 00:02:19.144 CC lib/ftl/utils/ftl_conf.o 00:02:19.144 CC lib/ftl/utils/ftl_mempool.o 00:02:19.144 CC lib/ftl/utils/ftl_property.o 00:02:19.144 CC lib/ftl/utils/ftl_bitmap.o 00:02:19.144 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:19.144 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:19.144 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:19.144 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:19.144 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:19.144 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:19.144 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:19.144 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:19.144 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:19.144 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:19.144 CC lib/ftl/base/ftl_base_dev.o 00:02:19.403 CC lib/ftl/ftl_trace.o 00:02:19.403 CC lib/ftl/base/ftl_base_bdev.o 00:02:19.662 LIB libspdk_nbd.a 00:02:19.662 SO libspdk_nbd.so.6.0 00:02:19.920 LIB libspdk_scsi.a 00:02:19.920 SYMLINK libspdk_nbd.so 00:02:19.920 SO libspdk_scsi.so.8.0 00:02:19.920 LIB libspdk_ublk.a 00:02:19.920 SO libspdk_ublk.so.2.0 00:02:19.920 SYMLINK libspdk_scsi.so 00:02:19.920 SYMLINK libspdk_ublk.so 00:02:20.178 LIB libspdk_ftl.a 00:02:20.178 CC lib/iscsi/iscsi.o 00:02:20.178 CC lib/iscsi/conn.o 00:02:20.178 CC lib/iscsi/param.o 00:02:20.178 CC lib/iscsi/init_grp.o 00:02:20.178 CC lib/iscsi/md5.o 00:02:20.178 CC lib/iscsi/portal_grp.o 00:02:20.178 CC lib/iscsi/tgt_node.o 00:02:20.178 CC lib/iscsi/iscsi_rpc.o 00:02:20.178 CC lib/iscsi/iscsi_subsystem.o 00:02:20.178 CC lib/iscsi/task.o 00:02:20.178 CC lib/vhost/vhost.o 00:02:20.178 CC lib/vhost/vhost_rpc.o 00:02:20.178 CC lib/vhost/vhost_scsi.o 
00:02:20.178 CC lib/vhost/vhost_blk.o 00:02:20.178 CC lib/vhost/rte_vhost_user.o 00:02:20.178 SO libspdk_ftl.so.8.0 00:02:20.742 SYMLINK libspdk_ftl.so 00:02:20.999 LIB libspdk_nvmf.a 00:02:20.999 LIB libspdk_vhost.a 00:02:20.999 SO libspdk_nvmf.so.17.0 00:02:20.999 SO libspdk_vhost.so.7.1 00:02:21.257 SYMLINK libspdk_nvmf.so 00:02:21.257 SYMLINK libspdk_vhost.so 00:02:21.257 LIB libspdk_iscsi.a 00:02:21.257 SO libspdk_iscsi.so.7.0 00:02:21.257 SYMLINK libspdk_iscsi.so 00:02:21.824 CC module/env_dpdk/env_dpdk_rpc.o 00:02:21.824 CC module/scheduler/gscheduler/gscheduler.o 00:02:21.824 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:21.824 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev.o 00:02:21.824 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:21.824 CC module/accel/dpdk_cryptodev/accel_dpdk_cryptodev_rpc.o 00:02:21.824 CC module/sock/posix/posix.o 00:02:21.824 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev_rpc.o 00:02:21.824 CC module/accel/dpdk_compressdev/accel_dpdk_compressdev.o 00:02:21.824 CC module/accel/ioat/accel_ioat.o 00:02:21.824 CC module/accel/ioat/accel_ioat_rpc.o 00:02:21.824 CC module/accel/error/accel_error.o 00:02:21.824 CC module/accel/error/accel_error_rpc.o 00:02:21.824 CC module/blob/bdev/blob_bdev.o 00:02:21.824 CC module/accel/dsa/accel_dsa.o 00:02:21.824 CC module/accel/dsa/accel_dsa_rpc.o 00:02:21.824 CC module/accel/iaa/accel_iaa.o 00:02:21.824 CC module/accel/iaa/accel_iaa_rpc.o 00:02:21.824 LIB libspdk_env_dpdk_rpc.a 00:02:21.824 SO libspdk_env_dpdk_rpc.so.5.0 00:02:21.824 LIB libspdk_scheduler_gscheduler.a 00:02:22.082 LIB libspdk_scheduler_dpdk_governor.a 00:02:22.082 LIB libspdk_scheduler_dynamic.a 00:02:22.082 SO libspdk_scheduler_gscheduler.so.3.0 00:02:22.082 SYMLINK libspdk_env_dpdk_rpc.so 00:02:22.082 SO libspdk_scheduler_dpdk_governor.so.3.0 00:02:22.082 SO libspdk_scheduler_dynamic.so.3.0 00:02:22.082 LIB libspdk_accel_ioat.a 00:02:22.082 LIB libspdk_accel_error.a 00:02:22.082 SYMLINK 
libspdk_scheduler_gscheduler.so 00:02:22.082 SO libspdk_accel_ioat.so.5.0 00:02:22.082 LIB libspdk_accel_iaa.a 00:02:22.082 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:22.082 SO libspdk_accel_error.so.1.0 00:02:22.082 SYMLINK libspdk_scheduler_dynamic.so 00:02:22.082 LIB libspdk_accel_dsa.a 00:02:22.082 LIB libspdk_blob_bdev.a 00:02:22.082 SO libspdk_accel_iaa.so.2.0 00:02:22.082 SO libspdk_accel_dsa.so.4.0 00:02:22.082 SO libspdk_blob_bdev.so.10.1 00:02:22.082 SYMLINK libspdk_accel_ioat.so 00:02:22.082 SYMLINK libspdk_accel_error.so 00:02:22.082 SYMLINK libspdk_accel_iaa.so 00:02:22.082 SYMLINK libspdk_blob_bdev.so 00:02:22.082 SYMLINK libspdk_accel_dsa.so 00:02:22.340 LIB libspdk_sock_posix.a 00:02:22.340 SO libspdk_sock_posix.so.5.0 00:02:22.598 CC module/blobfs/bdev/blobfs_bdev.o 00:02:22.598 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:22.598 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:22.598 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:22.598 CC module/bdev/malloc/bdev_malloc.o 00:02:22.598 CC module/bdev/delay/vbdev_delay.o 00:02:22.598 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:22.598 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:22.598 SYMLINK libspdk_sock_posix.so 00:02:22.598 CC module/bdev/split/vbdev_split.o 00:02:22.598 CC module/bdev/aio/bdev_aio_rpc.o 00:02:22.598 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:22.598 CC module/bdev/aio/bdev_aio.o 00:02:22.598 CC module/bdev/error/vbdev_error.o 00:02:22.598 CC module/bdev/ftl/bdev_ftl.o 00:02:22.598 CC module/bdev/error/vbdev_error_rpc.o 00:02:22.598 CC module/bdev/raid/bdev_raid.o 00:02:22.598 CC module/bdev/split/vbdev_split_rpc.o 00:02:22.598 CC module/bdev/raid/bdev_raid_rpc.o 00:02:22.598 CC module/bdev/raid/bdev_raid_sb.o 00:02:22.598 CC module/bdev/raid/raid0.o 00:02:22.598 CC module/bdev/lvol/vbdev_lvol.o 00:02:22.598 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:22.598 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:22.598 CC module/bdev/raid/concat.o 00:02:22.598 CC 
module/bdev/virtio/bdev_virtio_blk.o 00:02:22.598 CC module/bdev/raid/raid1.o 00:02:22.598 CC module/bdev/passthru/vbdev_passthru.o 00:02:22.598 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:22.598 CC module/bdev/gpt/vbdev_gpt.o 00:02:22.598 CC module/bdev/gpt/gpt.o 00:02:22.598 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:22.598 CC module/bdev/nvme/bdev_nvme.o 00:02:22.598 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:22.598 CC module/bdev/null/bdev_null.o 00:02:22.599 CC module/bdev/nvme/bdev_mdns_client.o 00:02:22.599 CC module/bdev/nvme/nvme_rpc.o 00:02:22.599 CC module/bdev/null/bdev_null_rpc.o 00:02:22.599 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:22.599 CC module/bdev/crypto/vbdev_crypto_rpc.o 00:02:22.599 CC module/bdev/nvme/vbdev_opal.o 00:02:22.599 CC module/bdev/crypto/vbdev_crypto.o 00:02:22.599 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:22.599 CC module/bdev/iscsi/bdev_iscsi.o 00:02:22.599 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:22.599 CC module/bdev/compress/vbdev_compress.o 00:02:22.599 CC module/bdev/compress/vbdev_compress_rpc.o 00:02:22.599 LIB libspdk_accel_dpdk_compressdev.a 00:02:22.599 SO libspdk_accel_dpdk_compressdev.so.2.0 00:02:22.856 SYMLINK libspdk_accel_dpdk_compressdev.so 00:02:22.856 LIB libspdk_blobfs_bdev.a 00:02:22.856 LIB libspdk_bdev_split.a 00:02:22.856 LIB libspdk_bdev_error.a 00:02:22.856 LIB libspdk_bdev_ftl.a 00:02:22.856 LIB libspdk_bdev_gpt.a 00:02:22.856 SO libspdk_blobfs_bdev.so.5.0 00:02:22.856 SO libspdk_bdev_split.so.5.0 00:02:22.856 SO libspdk_bdev_error.so.5.0 00:02:22.856 LIB libspdk_bdev_passthru.a 00:02:22.856 LIB libspdk_bdev_zone_block.a 00:02:22.856 SO libspdk_bdev_ftl.so.5.0 00:02:22.856 LIB libspdk_bdev_aio.a 00:02:22.856 SO libspdk_bdev_gpt.so.5.0 00:02:22.856 SYMLINK libspdk_blobfs_bdev.so 00:02:22.857 SO libspdk_bdev_zone_block.so.5.0 00:02:22.857 SYMLINK libspdk_bdev_split.so 00:02:22.857 SYMLINK libspdk_bdev_error.so 00:02:22.857 SO libspdk_bdev_passthru.so.5.0 00:02:22.857 SO 
libspdk_bdev_aio.so.5.0 00:02:22.857 LIB libspdk_bdev_null.a 00:02:22.857 SYMLINK libspdk_bdev_gpt.so 00:02:22.857 SYMLINK libspdk_bdev_ftl.so 00:02:22.857 SO libspdk_bdev_null.so.5.0 00:02:22.857 SYMLINK libspdk_bdev_zone_block.so 00:02:22.857 SYMLINK libspdk_bdev_passthru.so 00:02:22.857 LIB libspdk_accel_dpdk_cryptodev.a 00:02:22.857 SYMLINK libspdk_bdev_aio.so 00:02:22.857 LIB libspdk_bdev_crypto.a 00:02:22.857 LIB libspdk_bdev_delay.a 00:02:22.857 LIB libspdk_bdev_iscsi.a 00:02:22.857 LIB libspdk_bdev_compress.a 00:02:22.857 SO libspdk_accel_dpdk_cryptodev.so.2.0 00:02:22.857 LIB libspdk_bdev_malloc.a 00:02:22.857 LIB libspdk_bdev_lvol.a 00:02:23.123 SYMLINK libspdk_bdev_null.so 00:02:23.123 SO libspdk_bdev_delay.so.5.0 00:02:23.123 SO libspdk_bdev_crypto.so.5.0 00:02:23.123 SO libspdk_bdev_iscsi.so.5.0 00:02:23.123 SO libspdk_bdev_compress.so.5.0 00:02:23.123 SYMLINK libspdk_accel_dpdk_cryptodev.so 00:02:23.123 SO libspdk_bdev_malloc.so.5.0 00:02:23.123 SO libspdk_bdev_lvol.so.5.0 00:02:23.123 SYMLINK libspdk_bdev_delay.so 00:02:23.123 SYMLINK libspdk_bdev_crypto.so 00:02:23.123 SYMLINK libspdk_bdev_iscsi.so 00:02:23.123 SYMLINK libspdk_bdev_compress.so 00:02:23.123 SYMLINK libspdk_bdev_malloc.so 00:02:23.123 SYMLINK libspdk_bdev_lvol.so 00:02:23.123 LIB libspdk_bdev_virtio.a 00:02:23.123 SO libspdk_bdev_virtio.so.5.0 00:02:23.123 SYMLINK libspdk_bdev_virtio.so 00:02:23.422 LIB libspdk_bdev_raid.a 00:02:23.422 SO libspdk_bdev_raid.so.5.0 00:02:23.422 SYMLINK libspdk_bdev_raid.so 00:02:24.355 LIB libspdk_bdev_nvme.a 00:02:24.355 SO libspdk_bdev_nvme.so.6.0 00:02:24.355 SYMLINK libspdk_bdev_nvme.so 00:02:24.919 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:24.919 CC module/event/subsystems/vmd/vmd.o 00:02:24.919 CC module/event/subsystems/scheduler/scheduler.o 00:02:24.919 CC module/event/subsystems/iobuf/iobuf.o 00:02:24.919 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:24.919 CC module/event/subsystems/sock/sock.o 00:02:24.919 CC 
module/event/subsystems/vhost_blk/vhost_blk.o 00:02:24.919 LIB libspdk_event_scheduler.a 00:02:24.919 LIB libspdk_event_sock.a 00:02:24.919 LIB libspdk_event_vmd.a 00:02:24.919 LIB libspdk_event_vhost_blk.a 00:02:24.919 SO libspdk_event_sock.so.4.0 00:02:24.919 LIB libspdk_event_iobuf.a 00:02:24.919 SO libspdk_event_scheduler.so.3.0 00:02:24.919 SO libspdk_event_vhost_blk.so.2.0 00:02:24.919 SO libspdk_event_vmd.so.5.0 00:02:24.919 SO libspdk_event_iobuf.so.2.0 00:02:25.177 SYMLINK libspdk_event_sock.so 00:02:25.177 SYMLINK libspdk_event_scheduler.so 00:02:25.177 SYMLINK libspdk_event_vhost_blk.so 00:02:25.177 SYMLINK libspdk_event_vmd.so 00:02:25.177 SYMLINK libspdk_event_iobuf.so 00:02:25.177 CC module/event/subsystems/accel/accel.o 00:02:25.436 LIB libspdk_event_accel.a 00:02:25.436 SO libspdk_event_accel.so.5.0 00:02:25.436 SYMLINK libspdk_event_accel.so 00:02:25.694 CC module/event/subsystems/bdev/bdev.o 00:02:25.953 LIB libspdk_event_bdev.a 00:02:25.953 SO libspdk_event_bdev.so.5.0 00:02:25.953 SYMLINK libspdk_event_bdev.so 00:02:26.209 CC module/event/subsystems/scsi/scsi.o 00:02:26.209 CC module/event/subsystems/ublk/ublk.o 00:02:26.209 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:26.209 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:26.209 CC module/event/subsystems/nbd/nbd.o 00:02:26.209 LIB libspdk_event_scsi.a 00:02:26.467 LIB libspdk_event_ublk.a 00:02:26.467 LIB libspdk_event_nbd.a 00:02:26.467 SO libspdk_event_ublk.so.2.0 00:02:26.467 SO libspdk_event_scsi.so.5.0 00:02:26.467 SO libspdk_event_nbd.so.5.0 00:02:26.467 LIB libspdk_event_nvmf.a 00:02:26.467 SYMLINK libspdk_event_ublk.so 00:02:26.467 SYMLINK libspdk_event_scsi.so 00:02:26.467 SO libspdk_event_nvmf.so.5.0 00:02:26.467 SYMLINK libspdk_event_nbd.so 00:02:26.467 SYMLINK libspdk_event_nvmf.so 00:02:26.724 CC module/event/subsystems/iscsi/iscsi.o 00:02:26.724 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:26.724 LIB libspdk_event_vhost_scsi.a 00:02:26.724 LIB 
libspdk_event_iscsi.a 00:02:26.724 SO libspdk_event_vhost_scsi.so.2.0 00:02:26.724 SO libspdk_event_iscsi.so.5.0 00:02:26.982 SYMLINK libspdk_event_vhost_scsi.so 00:02:26.982 SYMLINK libspdk_event_iscsi.so 00:02:26.982 SO libspdk.so.5.0 00:02:26.982 SYMLINK libspdk.so 00:02:27.245 CXX app/trace/trace.o 00:02:27.245 CC app/trace_record/trace_record.o 00:02:27.245 CC app/spdk_lspci/spdk_lspci.o 00:02:27.245 CC app/spdk_nvme_perf/perf.o 00:02:27.245 CC app/spdk_nvme_identify/identify.o 00:02:27.245 CC app/spdk_top/spdk_top.o 00:02:27.245 CC app/spdk_nvme_discover/discovery_aer.o 00:02:27.245 TEST_HEADER include/spdk/accel.h 00:02:27.245 TEST_HEADER include/spdk/accel_module.h 00:02:27.245 CC test/rpc_client/rpc_client_test.o 00:02:27.245 TEST_HEADER include/spdk/assert.h 00:02:27.245 TEST_HEADER include/spdk/barrier.h 00:02:27.245 TEST_HEADER include/spdk/base64.h 00:02:27.245 TEST_HEADER include/spdk/bdev_module.h 00:02:27.245 TEST_HEADER include/spdk/bdev.h 00:02:27.245 TEST_HEADER include/spdk/bdev_zone.h 00:02:27.245 TEST_HEADER include/spdk/bit_array.h 00:02:27.245 TEST_HEADER include/spdk/bit_pool.h 00:02:27.245 TEST_HEADER include/spdk/blob_bdev.h 00:02:27.245 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:27.245 TEST_HEADER include/spdk/blobfs.h 00:02:27.245 TEST_HEADER include/spdk/blob.h 00:02:27.245 TEST_HEADER include/spdk/conf.h 00:02:27.245 TEST_HEADER include/spdk/config.h 00:02:27.245 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:27.245 TEST_HEADER include/spdk/cpuset.h 00:02:27.245 TEST_HEADER include/spdk/crc16.h 00:02:27.245 CC app/spdk_dd/spdk_dd.o 00:02:27.245 TEST_HEADER include/spdk/crc32.h 00:02:27.245 TEST_HEADER include/spdk/crc64.h 00:02:27.245 CC app/iscsi_tgt/iscsi_tgt.o 00:02:27.506 CC app/nvmf_tgt/nvmf_main.o 00:02:27.506 TEST_HEADER include/spdk/dif.h 00:02:27.506 CC app/vhost/vhost.o 00:02:27.506 TEST_HEADER include/spdk/dma.h 00:02:27.506 TEST_HEADER include/spdk/endian.h 00:02:27.506 TEST_HEADER include/spdk/env_dpdk.h 00:02:27.506 
TEST_HEADER include/spdk/env.h 00:02:27.506 CC examples/nvme/hotplug/hotplug.o 00:02:27.506 CC examples/nvme/reconnect/reconnect.o 00:02:27.506 CC examples/vmd/lsvmd/lsvmd.o 00:02:27.506 CC examples/ioat/verify/verify.o 00:02:27.506 TEST_HEADER include/spdk/event.h 00:02:27.506 CC examples/nvme/abort/abort.o 00:02:27.506 TEST_HEADER include/spdk/fd_group.h 00:02:27.506 CC examples/nvme/hello_world/hello_world.o 00:02:27.506 CC test/event/reactor/reactor.o 00:02:27.506 CC test/env/pci/pci_ut.o 00:02:27.506 CC examples/vmd/led/led.o 00:02:27.506 TEST_HEADER include/spdk/fd.h 00:02:27.506 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:27.506 CC test/event/reactor_perf/reactor_perf.o 00:02:27.506 CC test/app/jsoncat/jsoncat.o 00:02:27.506 CC examples/ioat/perf/perf.o 00:02:27.506 TEST_HEADER include/spdk/file.h 00:02:27.506 CC examples/nvme/arbitration/arbitration.o 00:02:27.506 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:27.507 TEST_HEADER include/spdk/ftl.h 00:02:27.507 CC test/app/histogram_perf/histogram_perf.o 00:02:27.507 CC test/env/memory/memory_ut.o 00:02:27.507 CC test/nvme/aer/aer.o 00:02:27.507 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:27.507 CC test/nvme/e2edp/nvme_dp.o 00:02:27.507 TEST_HEADER include/spdk/gpt_spec.h 00:02:27.507 TEST_HEADER include/spdk/hexlify.h 00:02:27.507 CC app/fio/nvme/fio_plugin.o 00:02:27.507 CC test/event/event_perf/event_perf.o 00:02:27.507 CC test/nvme/reset/reset.o 00:02:27.507 CC app/spdk_tgt/spdk_tgt.o 00:02:27.507 TEST_HEADER include/spdk/histogram_data.h 00:02:27.507 CC examples/accel/perf/accel_perf.o 00:02:27.507 CC test/app/stub/stub.o 00:02:27.507 CC test/env/vtophys/vtophys.o 00:02:27.507 TEST_HEADER include/spdk/idxd.h 00:02:27.507 CC test/nvme/sgl/sgl.o 00:02:27.507 TEST_HEADER include/spdk/idxd_spec.h 00:02:27.507 CC test/nvme/overhead/overhead.o 00:02:27.507 CC test/nvme/boot_partition/boot_partition.o 00:02:27.507 TEST_HEADER include/spdk/init.h 00:02:27.507 CC 
test/nvme/compliance/nvme_compliance.o 00:02:27.507 CC test/nvme/simple_copy/simple_copy.o 00:02:27.507 TEST_HEADER include/spdk/ioat.h 00:02:27.507 CC examples/idxd/perf/perf.o 00:02:27.507 CC test/nvme/startup/startup.o 00:02:27.507 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:27.507 CC test/nvme/connect_stress/connect_stress.o 00:02:27.507 CC test/event/app_repeat/app_repeat.o 00:02:27.507 CC test/thread/poller_perf/poller_perf.o 00:02:27.507 CC test/nvme/reserve/reserve.o 00:02:27.507 TEST_HEADER include/spdk/ioat_spec.h 00:02:27.507 CC test/nvme/err_injection/err_injection.o 00:02:27.507 CC examples/sock/hello_world/hello_sock.o 00:02:27.507 TEST_HEADER include/spdk/iscsi_spec.h 00:02:27.507 CC examples/util/zipf/zipf.o 00:02:27.507 TEST_HEADER include/spdk/json.h 00:02:27.507 TEST_HEADER include/spdk/jsonrpc.h 00:02:27.507 CC test/blobfs/mkfs/mkfs.o 00:02:27.507 CC examples/blob/cli/blobcli.o 00:02:27.507 CC test/dma/test_dma/test_dma.o 00:02:27.507 TEST_HEADER include/spdk/likely.h 00:02:27.507 CC examples/bdev/bdevperf/bdevperf.o 00:02:27.507 TEST_HEADER include/spdk/log.h 00:02:27.507 CC test/accel/dif/dif.o 00:02:27.507 TEST_HEADER include/spdk/lvol.h 00:02:27.507 TEST_HEADER include/spdk/memory.h 00:02:27.507 CC app/fio/bdev/fio_plugin.o 00:02:27.507 CC examples/blob/hello_world/hello_blob.o 00:02:27.507 TEST_HEADER include/spdk/mmio.h 00:02:27.507 TEST_HEADER include/spdk/nbd.h 00:02:27.507 CC test/event/scheduler/scheduler.o 00:02:27.507 TEST_HEADER include/spdk/notify.h 00:02:27.507 CC examples/bdev/hello_world/hello_bdev.o 00:02:27.507 CC test/app/bdev_svc/bdev_svc.o 00:02:27.507 CC test/bdev/bdevio/bdevio.o 00:02:27.507 TEST_HEADER include/spdk/nvme.h 00:02:27.507 TEST_HEADER include/spdk/nvme_intel.h 00:02:27.507 CC examples/nvmf/nvmf/nvmf.o 00:02:27.507 CC examples/thread/thread/thread_ex.o 00:02:27.507 CC test/lvol/esnap/esnap.o 00:02:27.507 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:27.507 TEST_HEADER include/spdk/nvme_ocssd_spec.h 
00:02:27.507 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:27.507 TEST_HEADER include/spdk/nvme_spec.h 00:02:27.507 TEST_HEADER include/spdk/nvme_zns.h 00:02:27.507 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:27.507 CC test/env/mem_callbacks/mem_callbacks.o 00:02:27.507 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:27.507 TEST_HEADER include/spdk/nvmf.h 00:02:27.507 LINK spdk_lspci 00:02:27.507 TEST_HEADER include/spdk/nvmf_spec.h 00:02:27.507 TEST_HEADER include/spdk/nvmf_transport.h 00:02:27.507 TEST_HEADER include/spdk/opal.h 00:02:27.507 TEST_HEADER include/spdk/opal_spec.h 00:02:27.507 TEST_HEADER include/spdk/pci_ids.h 00:02:27.507 TEST_HEADER include/spdk/pipe.h 00:02:27.507 TEST_HEADER include/spdk/queue.h 00:02:27.507 TEST_HEADER include/spdk/reduce.h 00:02:27.507 TEST_HEADER include/spdk/rpc.h 00:02:27.507 TEST_HEADER include/spdk/scheduler.h 00:02:27.507 TEST_HEADER include/spdk/scsi.h 00:02:27.507 TEST_HEADER include/spdk/scsi_spec.h 00:02:27.507 LINK rpc_client_test 00:02:27.769 TEST_HEADER include/spdk/sock.h 00:02:27.769 TEST_HEADER include/spdk/stdinc.h 00:02:27.769 TEST_HEADER include/spdk/string.h 00:02:27.769 TEST_HEADER include/spdk/thread.h 00:02:27.769 TEST_HEADER include/spdk/trace.h 00:02:27.769 TEST_HEADER include/spdk/trace_parser.h 00:02:27.769 LINK spdk_nvme_discover 00:02:27.769 TEST_HEADER include/spdk/tree.h 00:02:27.769 TEST_HEADER include/spdk/ublk.h 00:02:27.769 TEST_HEADER include/spdk/util.h 00:02:27.769 TEST_HEADER include/spdk/uuid.h 00:02:27.769 TEST_HEADER include/spdk/version.h 00:02:27.769 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:27.769 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:27.769 LINK reactor 00:02:27.769 TEST_HEADER include/spdk/vhost.h 00:02:27.769 TEST_HEADER include/spdk/vmd.h 00:02:27.769 LINK led 00:02:27.769 LINK spdk_trace_record 00:02:27.769 TEST_HEADER include/spdk/xor.h 00:02:27.769 LINK reactor_perf 00:02:27.769 TEST_HEADER include/spdk/zipf.h 00:02:27.769 CXX test/cpp_headers/accel.o 00:02:27.769 
LINK interrupt_tgt 00:02:27.769 LINK histogram_perf 00:02:27.769 LINK event_perf 00:02:27.769 LINK lsvmd 00:02:27.769 LINK env_dpdk_post_init 00:02:27.769 LINK app_repeat 00:02:27.769 LINK iscsi_tgt 00:02:27.769 LINK cmb_copy 00:02:27.769 LINK jsoncat 00:02:27.769 LINK nvmf_tgt 00:02:27.769 LINK zipf 00:02:27.769 LINK vhost 00:02:27.769 LINK vtophys 00:02:27.769 LINK poller_perf 00:02:27.769 LINK connect_stress 00:02:27.769 LINK verify 00:02:27.769 LINK boot_partition 00:02:27.769 LINK stub 00:02:27.769 LINK startup 00:02:27.769 LINK ioat_perf 00:02:27.769 LINK hotplug 00:02:27.769 LINK pmr_persistence 00:02:27.769 LINK bdev_svc 00:02:27.769 LINK spdk_tgt 00:02:27.769 LINK hello_world 00:02:27.769 LINK err_injection 00:02:27.769 LINK reserve 00:02:27.769 LINK reset 00:02:27.769 LINK mkfs 00:02:28.035 LINK hello_blob 00:02:28.035 LINK simple_copy 00:02:28.035 LINK sgl 00:02:28.035 LINK aer 00:02:28.035 LINK nvme_dp 00:02:28.035 LINK hello_sock 00:02:28.035 LINK spdk_dd 00:02:28.035 LINK scheduler 00:02:28.035 LINK overhead 00:02:28.035 LINK hello_bdev 00:02:28.035 LINK nvme_compliance 00:02:28.035 LINK arbitration 00:02:28.035 LINK thread 00:02:28.035 LINK abort 00:02:28.035 LINK idxd_perf 00:02:28.035 LINK reconnect 00:02:28.035 LINK pci_ut 00:02:28.035 CXX test/cpp_headers/accel_module.o 00:02:28.035 LINK nvmf 00:02:28.035 LINK spdk_trace 00:02:28.035 CXX test/cpp_headers/assert.o 00:02:28.035 LINK test_dma 00:02:28.035 CXX test/cpp_headers/barrier.o 00:02:28.035 CC test/nvme/fused_ordering/fused_ordering.o 00:02:28.035 CXX test/cpp_headers/base64.o 00:02:28.035 CXX test/cpp_headers/bdev.o 00:02:28.035 LINK bdevio 00:02:28.035 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:28.035 CC test/nvme/fdp/fdp.o 00:02:28.035 LINK dif 00:02:28.301 CXX test/cpp_headers/bdev_module.o 00:02:28.301 CC test/nvme/cuse/cuse.o 00:02:28.301 CXX test/cpp_headers/bdev_zone.o 00:02:28.301 CXX test/cpp_headers/bit_array.o 00:02:28.301 CXX test/cpp_headers/bit_pool.o 00:02:28.301 CXX 
test/cpp_headers/blob_bdev.o 00:02:28.301 CXX test/cpp_headers/blobfs_bdev.o 00:02:28.301 CXX test/cpp_headers/blobfs.o 00:02:28.301 CXX test/cpp_headers/blob.o 00:02:28.301 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:28.301 CXX test/cpp_headers/conf.o 00:02:28.301 CXX test/cpp_headers/config.o 00:02:28.301 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:28.301 CXX test/cpp_headers/cpuset.o 00:02:28.301 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:28.301 CXX test/cpp_headers/crc16.o 00:02:28.301 CXX test/cpp_headers/crc64.o 00:02:28.301 CXX test/cpp_headers/crc32.o 00:02:28.301 CXX test/cpp_headers/dif.o 00:02:28.301 LINK accel_perf 00:02:28.301 CXX test/cpp_headers/dma.o 00:02:28.301 LINK nvme_fuzz 00:02:28.301 CXX test/cpp_headers/endian.o 00:02:28.301 CXX test/cpp_headers/env_dpdk.o 00:02:28.301 CXX test/cpp_headers/env.o 00:02:28.301 CXX test/cpp_headers/event.o 00:02:28.301 CXX test/cpp_headers/fd_group.o 00:02:28.301 LINK nvme_manage 00:02:28.301 CXX test/cpp_headers/fd.o 00:02:28.301 CXX test/cpp_headers/file.o 00:02:28.301 CXX test/cpp_headers/ftl.o 00:02:28.301 LINK spdk_bdev 00:02:28.301 CXX test/cpp_headers/gpt_spec.o 00:02:28.301 LINK spdk_nvme 00:02:28.301 CXX test/cpp_headers/hexlify.o 00:02:28.301 CXX test/cpp_headers/histogram_data.o 00:02:28.301 LINK blobcli 00:02:28.301 CXX test/cpp_headers/idxd.o 00:02:28.301 CXX test/cpp_headers/idxd_spec.o 00:02:28.301 CXX test/cpp_headers/init.o 00:02:28.301 CXX test/cpp_headers/ioat.o 00:02:28.301 CXX test/cpp_headers/ioat_spec.o 00:02:28.301 CXX test/cpp_headers/json.o 00:02:28.301 CXX test/cpp_headers/iscsi_spec.o 00:02:28.301 CXX test/cpp_headers/jsonrpc.o 00:02:28.301 CXX test/cpp_headers/likely.o 00:02:28.301 CXX test/cpp_headers/lvol.o 00:02:28.301 CXX test/cpp_headers/log.o 00:02:28.301 CXX test/cpp_headers/memory.o 00:02:28.301 CXX test/cpp_headers/mmio.o 00:02:28.301 CXX test/cpp_headers/nbd.o 00:02:28.301 CXX test/cpp_headers/notify.o 00:02:28.301 CXX test/cpp_headers/nvme.o 00:02:28.564 
LINK mem_callbacks 00:02:28.564 CXX test/cpp_headers/nvme_intel.o 00:02:28.564 CXX test/cpp_headers/nvme_ocssd.o 00:02:28.564 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:28.564 CXX test/cpp_headers/nvme_spec.o 00:02:28.564 CXX test/cpp_headers/nvme_zns.o 00:02:28.564 LINK fused_ordering 00:02:28.564 CXX test/cpp_headers/nvmf_cmd.o 00:02:28.564 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:28.564 CXX test/cpp_headers/nvmf.o 00:02:28.564 CXX test/cpp_headers/nvmf_spec.o 00:02:28.564 CXX test/cpp_headers/nvmf_transport.o 00:02:28.564 CXX test/cpp_headers/opal.o 00:02:28.564 CXX test/cpp_headers/opal_spec.o 00:02:28.564 CXX test/cpp_headers/pci_ids.o 00:02:28.564 CXX test/cpp_headers/pipe.o 00:02:28.564 CXX test/cpp_headers/queue.o 00:02:28.564 CXX test/cpp_headers/reduce.o 00:02:28.564 CXX test/cpp_headers/rpc.o 00:02:28.564 CXX test/cpp_headers/scheduler.o 00:02:28.564 CXX test/cpp_headers/scsi.o 00:02:28.564 LINK doorbell_aers 00:02:28.564 CXX test/cpp_headers/scsi_spec.o 00:02:28.564 CXX test/cpp_headers/stdinc.o 00:02:28.564 CXX test/cpp_headers/sock.o 00:02:28.564 LINK spdk_nvme_identify 00:02:28.564 CXX test/cpp_headers/thread.o 00:02:28.564 CXX test/cpp_headers/string.o 00:02:28.564 CXX test/cpp_headers/trace.o 00:02:28.564 CXX test/cpp_headers/trace_parser.o 00:02:28.564 CXX test/cpp_headers/tree.o 00:02:28.564 LINK spdk_nvme_perf 00:02:28.823 LINK spdk_top 00:02:28.823 CXX test/cpp_headers/ublk.o 00:02:28.823 CXX test/cpp_headers/util.o 00:02:28.823 CXX test/cpp_headers/uuid.o 00:02:28.823 CXX test/cpp_headers/version.o 00:02:28.823 CXX test/cpp_headers/vfio_user_pci.o 00:02:28.823 CXX test/cpp_headers/vfio_user_spec.o 00:02:28.823 CXX test/cpp_headers/vhost.o 00:02:28.823 CXX test/cpp_headers/vmd.o 00:02:28.823 CXX test/cpp_headers/xor.o 00:02:28.823 CXX test/cpp_headers/zipf.o 00:02:28.823 LINK bdevperf 00:02:28.823 LINK fdp 00:02:29.081 LINK memory_ut 00:02:29.081 LINK vhost_fuzz 00:02:29.340 LINK cuse 00:02:29.918 LINK iscsi_fuzz 00:02:31.822 LINK esnap 
00:02:31.822 00:02:31.822 real 1m10.416s 00:02:31.823 user 14m12.821s 00:02:31.823 sys 3m37.272s 00:02:31.823 11:54:39 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:31.823 11:54:39 -- common/autotest_common.sh@10 -- $ set +x 00:02:31.823 ************************************ 00:02:31.823 END TEST make 00:02:31.823 ************************************ 00:02:32.082 11:54:39 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:02:32.082 11:54:39 -- nvmf/common.sh@7 -- # uname -s 00:02:32.082 11:54:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:32.082 11:54:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:32.082 11:54:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:32.082 11:54:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:32.082 11:54:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:32.082 11:54:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:32.082 11:54:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:32.082 11:54:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:32.082 11:54:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:32.082 11:54:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:32.082 11:54:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d40ca9-2a78-e711-906e-0017a4403562 00:02:32.082 11:54:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d40ca9-2a78-e711-906e-0017a4403562 00:02:32.082 11:54:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:32.082 11:54:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:32.082 11:54:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:02:32.082 11:54:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:02:32.082 11:54:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:32.082 11:54:39 -- scripts/common.sh@441 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:32.082 11:54:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:32.082 11:54:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.082 11:54:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.082 11:54:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.082 11:54:39 -- paths/export.sh@5 -- # export PATH 00:02:32.082 11:54:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.082 11:54:39 -- nvmf/common.sh@46 -- # : 0 00:02:32.082 11:54:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:02:32.082 11:54:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:02:32.082 11:54:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:02:32.082 11:54:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:32.082 11:54:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:32.082 11:54:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:02:32.082 11:54:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 
']' 00:02:32.082 11:54:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:02:32.082 11:54:39 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:32.082 11:54:39 -- spdk/autotest.sh@32 -- # uname -s 00:02:32.082 11:54:39 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:32.082 11:54:39 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:32.082 11:54:39 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps 00:02:32.082 11:54:39 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:32.082 11:54:39 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/coredumps 00:02:32.082 11:54:39 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:32.082 11:54:39 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:32.082 11:54:39 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:32.082 11:54:39 -- spdk/autotest.sh@48 -- # udevadm_pid=1130166 00:02:32.082 11:54:39 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:02:32.082 11:54:39 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:32.082 11:54:39 -- spdk/autotest.sh@54 -- # echo 1130168 00:02:32.082 11:54:39 -- spdk/autotest.sh@56 -- # echo 1130169 00:02:32.082 11:54:39 -- spdk/autotest.sh@58 -- # [[ ............................... 
!= QEMU ]] 00:02:32.082 11:54:39 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:02:32.082 11:54:39 -- spdk/autotest.sh@60 -- # echo 1130170 00:02:32.082 11:54:39 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power 00:02:32.082 11:54:39 -- spdk/autotest.sh@62 -- # echo 1130171 00:02:32.082 11:54:39 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:32.082 11:54:39 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:02:32.082 11:54:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:32.082 11:54:39 -- common/autotest_common.sh@10 -- # set +x 00:02:32.082 11:54:39 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l 00:02:32.082 11:54:39 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power -l 00:02:32.082 11:54:39 -- spdk/autotest.sh@70 -- # create_test_list 00:02:32.082 11:54:39 -- common/autotest_common.sh@736 -- # xtrace_disable 00:02:32.082 11:54:39 -- common/autotest_common.sh@10 -- # set +x 00:02:32.082 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:02:32.082 Redirecting to /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:02:32.082 11:54:39 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/autotest.sh 00:02:32.082 11:54:39 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk 00:02:32.082 11:54:39 -- spdk/autotest.sh@72 -- # 
src=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:02:32.082 11:54:39 -- spdk/autotest.sh@73 -- # out=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:02:32.082 11:54:39 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/crypto-phy-autotest/spdk 00:02:32.082 11:54:39 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:02:32.082 11:54:39 -- common/autotest_common.sh@1440 -- # uname 00:02:32.082 11:54:39 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:02:32.082 11:54:39 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:02:32.082 11:54:39 -- common/autotest_common.sh@1460 -- # uname 00:02:32.082 11:54:39 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:02:32.082 11:54:39 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:02:32.082 11:54:39 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:02:32.082 11:54:39 -- spdk/autotest.sh@83 -- # hash lcov 00:02:32.082 11:54:39 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:32.082 11:54:39 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:02:32.082 --rc lcov_branch_coverage=1 00:02:32.082 --rc lcov_function_coverage=1 00:02:32.082 --rc genhtml_branch_coverage=1 00:02:32.082 --rc genhtml_function_coverage=1 00:02:32.082 --rc genhtml_legend=1 00:02:32.082 --rc geninfo_all_blocks=1 00:02:32.082 ' 00:02:32.082 11:54:39 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:02:32.082 --rc lcov_branch_coverage=1 00:02:32.082 --rc lcov_function_coverage=1 00:02:32.082 --rc genhtml_branch_coverage=1 00:02:32.082 --rc genhtml_function_coverage=1 00:02:32.082 --rc genhtml_legend=1 00:02:32.082 --rc geninfo_all_blocks=1 00:02:32.082 ' 00:02:32.082 11:54:39 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:02:32.082 --rc lcov_branch_coverage=1 00:02:32.083 --rc lcov_function_coverage=1 00:02:32.083 --rc genhtml_branch_coverage=1 00:02:32.083 --rc genhtml_function_coverage=1 00:02:32.083 --rc genhtml_legend=1 00:02:32.083 --rc geninfo_all_blocks=1 00:02:32.083 
--no-external' 00:02:32.083 11:54:39 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:02:32.083 --rc lcov_branch_coverage=1 00:02:32.083 --rc lcov_function_coverage=1 00:02:32.083 --rc genhtml_branch_coverage=1 00:02:32.083 --rc genhtml_function_coverage=1 00:02:32.083 --rc genhtml_legend=1 00:02:32.083 --rc geninfo_all_blocks=1 00:02:32.083 --no-external' 00:02:32.083 11:54:39 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:32.083 lcov: LCOV version 1.14 00:02:32.083 11:54:39 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/crypto-phy-autotest/spdk -o /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/cov_base.info 00:02:34.617 /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:02:34.617 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:02:34.617 /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:02:34.617 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:02:34.876 /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:02:34.876 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:02:52.987 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:52.987 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:52.987 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:52.987 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:52.987 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:52.987 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:52.987 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:52.987 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:52.987 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:52.987 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:52.987 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:52.987 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:52.987 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:52.987 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:52.987 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:52.987 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:52.987 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:52.987 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:52.987 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:52.987 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:52.987 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:52.987 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:52.987 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:52.987 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:52.987 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:52.987 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:52.987 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:52.987 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:52.987 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:52.987 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:52.987 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:52.987 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:52.987 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions 
found 00:02:52.987 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:52.987 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:52.987 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:52.987 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:52.987 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:52.987 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:52.987 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:52.987 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:52.987 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:52.987 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:52.987 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:52.987 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:52.987 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:52.987 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:52.987 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:52.987 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 
00:02:52.987 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:52.987 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:52.987 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:52.987 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:52.987 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:52.987 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:52.987 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:52.987 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:52.987 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:52.987 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:52.987 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:52.987 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:52.987 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:52.987 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:52.987 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:52.987 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions 
found 00:02:52.987 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:52.988 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:52.988 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV 
did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:52.988 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:52.988 
geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:52.988 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:52.988 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:52.989 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:52.989 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:52.989 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/crypto-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:56.273 11:55:03 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:02:56.274 11:55:03 -- common/autotest_common.sh@712 -- # xtrace_disable 00:02:56.274 11:55:03 -- common/autotest_common.sh@10 -- # set +x 00:02:56.274 11:55:03 -- spdk/autotest.sh@102 -- # rm -f 00:02:56.274 11:55:03 -- spdk/autotest.sh@105 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:02:59.559 0000:85:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:85:05.5 00:02:59.559 0000:ae:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:ae:05.5 00:02:59.559 0000:5e:00.0 (8086 0b60): Already using the nvme driver 00:02:59.559 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:59.559 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:59.559 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:59.559 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:59.817 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:59.817 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:59.817 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:59.817 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:59.817 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:59.817 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:59.817 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:59.817 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:59.817 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:00.118 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:00.118 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:00.118 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:00.118 11:55:07 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:03:00.118 11:55:07 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:00.118 11:55:07 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:00.118 11:55:07 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:00.118 11:55:07 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:00.118 11:55:07 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:00.118 11:55:07 -- 
common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:00.118 11:55:07 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:00.118 11:55:07 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:00.118 11:55:07 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:03:00.118 11:55:07 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 00:03:00.118 11:55:07 -- spdk/autotest.sh@121 -- # grep -v p 00:03:00.118 11:55:07 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:00.118 11:55:07 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:03:00.118 11:55:07 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:03:00.118 11:55:07 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:00.118 11:55:07 -- scripts/common.sh@389 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:00.118 No valid GPT data, bailing 00:03:00.118 11:55:07 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:00.118 11:55:07 -- scripts/common.sh@393 -- # pt= 00:03:00.118 11:55:07 -- scripts/common.sh@394 -- # return 1 00:03:00.118 11:55:07 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:00.118 1+0 records in 00:03:00.118 1+0 records out 00:03:00.118 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00514769 s, 204 MB/s 00:03:00.118 11:55:07 -- spdk/autotest.sh@129 -- # sync 00:03:00.118 11:55:07 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:00.118 11:55:07 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:00.118 11:55:07 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:05.395 11:55:12 -- spdk/autotest.sh@135 -- # uname -s 00:03:05.395 11:55:12 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:03:05.395 11:55:12 -- spdk/autotest.sh@136 -- # run_test setup.sh /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh 00:03:05.395 11:55:12 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:05.395 11:55:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:05.395 11:55:12 -- common/autotest_common.sh@10 -- # set +x 00:03:05.395 ************************************ 00:03:05.395 START TEST setup.sh 00:03:05.395 ************************************ 00:03:05.395 11:55:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/test-setup.sh 00:03:05.395 * Looking for test storage... 00:03:05.395 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:03:05.395 11:55:12 -- setup/test-setup.sh@10 -- # uname -s 00:03:05.395 11:55:12 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:05.395 11:55:12 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh 00:03:05.395 11:55:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:05.395 11:55:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:05.395 11:55:12 -- common/autotest_common.sh@10 -- # set +x 00:03:05.395 ************************************ 00:03:05.395 START TEST acl 00:03:05.395 ************************************ 00:03:05.395 11:55:12 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/acl.sh 00:03:05.395 * Looking for test storage... 
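The wipe step traced above only runs `dd` after `block_in_use` fails to find a partition table ("No valid GPT data, bailing", then `blkid -s PTTYPE` returns empty). A minimal sketch of that guard logic follows; the `blkid` call is stubbed out so the logic runs anywhere, and the device paths are placeholders, not the real test rig's devices.

```shell
# Sketch of the block_in_use guard seen in the log (scripts/common.sh):
# a device is safe to wipe only when no partition-table type is reported.
pt_type() {
  # stand-in for: blkid -s PTTYPE -o value "$1" (stubbed for portability)
  case "$1" in
    /dev/with_gpt) echo gpt ;;   # pretend this device carries a GPT label
    *) : ;;                      # empty output, like blkid on a blank disk
  esac
}

block_in_use() {
  local block=$1 pt
  pt=$(pt_type "$block")
  [[ -n $pt ]]   # non-empty PTTYPE => device is in use, do not wipe
}

if block_in_use /dev/with_gpt; then echo "in use"; fi
if ! block_in_use /dev/nvme0n1; then echo "free"; fi
```

In the real script the wipe (`dd if=/dev/zero of=$dev bs=1M count=1`) only runs on the "free" branch, which matches the single 1 MiB write recorded in the log.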
00:03:05.395 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:03:05.395 11:55:12 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:05.395 11:55:12 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:03:05.395 11:55:12 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:03:05.395 11:55:12 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:03:05.395 11:55:12 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:03:05.395 11:55:12 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:03:05.395 11:55:12 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:03:05.395 11:55:12 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:05.395 11:55:12 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:03:05.395 11:55:12 -- setup/acl.sh@12 -- # devs=() 00:03:05.395 11:55:12 -- setup/acl.sh@12 -- # declare -a devs 00:03:05.395 11:55:12 -- setup/acl.sh@13 -- # drivers=() 00:03:05.395 11:55:12 -- setup/acl.sh@13 -- # declare -A drivers 00:03:05.395 11:55:12 -- setup/acl.sh@51 -- # setup reset 00:03:05.395 11:55:12 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:05.395 11:55:12 -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:03:09.586 11:55:16 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:09.586 11:55:16 -- setup/acl.sh@16 -- # local dev driver 00:03:09.586 11:55:16 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:09.586 11:55:16 -- setup/acl.sh@15 -- # setup output status 00:03:09.586 11:55:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:09.586 11:55:16 -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:03:12.122 Hugepages 00:03:12.122 node hugesize free / total 00:03:12.122 11:55:19 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:12.122 11:55:19 -- setup/acl.sh@19 -- # continue 00:03:12.122 11:55:19 -- 
setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.122 11:55:19 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:12.122 11:55:19 -- setup/acl.sh@19 -- # continue 00:03:12.122 11:55:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.122 11:55:19 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:12.122 11:55:19 -- setup/acl.sh@19 -- # continue 00:03:12.122 11:55:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.122 00:03:12.122 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:12.122 11:55:19 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:12.122 11:55:19 -- setup/acl.sh@19 -- # continue 00:03:12.122 11:55:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.122 11:55:19 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:12.122 11:55:19 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.122 11:55:19 -- setup/acl.sh@20 -- # continue 00:03:12.122 11:55:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.122 11:55:19 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:12.122 11:55:19 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.122 11:55:19 -- setup/acl.sh@20 -- # continue 00:03:12.122 11:55:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.122 11:55:19 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:12.122 11:55:19 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.122 11:55:19 -- setup/acl.sh@20 -- # continue 00:03:12.122 11:55:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.122 11:55:19 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:12.122 11:55:19 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.122 11:55:19 -- setup/acl.sh@20 -- # continue 00:03:12.122 11:55:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.122 11:55:19 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:12.122 11:55:19 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.122 11:55:19 -- 
setup/acl.sh@20 -- # continue 00:03:12.122 11:55:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.122 11:55:19 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:12.122 11:55:19 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.122 11:55:19 -- setup/acl.sh@20 -- # continue 00:03:12.122 11:55:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.122 11:55:19 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:12.122 11:55:19 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.122 11:55:19 -- setup/acl.sh@20 -- # continue 00:03:12.122 11:55:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.122 11:55:19 -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:12.122 11:55:19 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.122 11:55:19 -- setup/acl.sh@20 -- # continue 00:03:12.122 11:55:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.381 11:55:19 -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:03:12.381 11:55:19 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:12.381 11:55:19 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:12.381 11:55:19 -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:12.381 11:55:19 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:12.381 11:55:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.381 11:55:19 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:12.381 11:55:19 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.381 11:55:19 -- setup/acl.sh@20 -- # continue 00:03:12.381 11:55:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.381 11:55:19 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:12.381 11:55:19 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.381 11:55:19 -- setup/acl.sh@20 -- # continue 00:03:12.381 11:55:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.381 11:55:19 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:12.381 
11:55:19 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.381 11:55:19 -- setup/acl.sh@20 -- # continue 00:03:12.381 11:55:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.381 11:55:19 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:12.381 11:55:19 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.381 11:55:19 -- setup/acl.sh@20 -- # continue 00:03:12.381 11:55:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.381 11:55:19 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:12.381 11:55:19 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.381 11:55:19 -- setup/acl.sh@20 -- # continue 00:03:12.381 11:55:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.381 11:55:19 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:12.381 11:55:19 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.381 11:55:19 -- setup/acl.sh@20 -- # continue 00:03:12.381 11:55:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.381 11:55:19 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:12.381 11:55:19 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.381 11:55:19 -- setup/acl.sh@20 -- # continue 00:03:12.381 11:55:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.381 11:55:19 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:12.381 11:55:19 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:12.381 11:55:19 -- setup/acl.sh@20 -- # continue 00:03:12.381 11:55:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.381 11:55:19 -- setup/acl.sh@19 -- # [[ 0000:85:05.5 == *:*:*.* ]] 00:03:12.381 11:55:19 -- setup/acl.sh@20 -- # [[ - == nvme ]] 00:03:12.381 11:55:19 -- setup/acl.sh@20 -- # continue 00:03:12.381 11:55:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.381 11:55:19 -- setup/acl.sh@19 -- # [[ 0000:ae:05.5 == *:*:*.* ]] 00:03:12.381 11:55:19 -- setup/acl.sh@20 -- # [[ - == nvme ]] 00:03:12.381 11:55:19 -- 
setup/acl.sh@20 -- # continue 00:03:12.381 11:55:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:12.381 11:55:19 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:12.381 11:55:19 -- setup/acl.sh@54 -- # run_test denied denied 00:03:12.381 11:55:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:12.381 11:55:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:12.381 11:55:19 -- common/autotest_common.sh@10 -- # set +x 00:03:12.381 ************************************ 00:03:12.381 START TEST denied 00:03:12.381 ************************************ 00:03:12.381 11:55:19 -- common/autotest_common.sh@1104 -- # denied 00:03:12.381 11:55:19 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:03:12.381 11:55:19 -- setup/acl.sh@38 -- # setup output config 00:03:12.381 11:55:19 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:03:12.381 11:55:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:12.382 11:55:19 -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:03:16.571 0000:5e:00.0 (8086 0b60): Skipping denied controller at 0000:5e:00.0 00:03:16.571 11:55:23 -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:03:16.571 11:55:23 -- setup/acl.sh@28 -- # local dev driver 00:03:16.571 11:55:23 -- setup/acl.sh@30 -- # for dev in "$@" 00:03:16.571 11:55:23 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:03:16.571 11:55:23 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:03:16.571 11:55:23 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:16.571 11:55:23 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:16.571 11:55:23 -- setup/acl.sh@41 -- # setup reset 00:03:16.571 11:55:23 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:16.571 11:55:23 -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:03:20.756 00:03:20.756 real 0m8.118s 00:03:20.756 user 
0m2.510s 00:03:20.756 sys 0m4.870s 00:03:20.756 11:55:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:20.756 11:55:27 -- common/autotest_common.sh@10 -- # set +x 00:03:20.756 ************************************ 00:03:20.756 END TEST denied 00:03:20.756 ************************************ 00:03:20.756 11:55:27 -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:20.756 11:55:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:20.756 11:55:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:20.756 11:55:27 -- common/autotest_common.sh@10 -- # set +x 00:03:20.756 ************************************ 00:03:20.756 START TEST allowed 00:03:20.756 ************************************ 00:03:20.756 11:55:27 -- common/autotest_common.sh@1104 -- # allowed 00:03:20.756 11:55:27 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:03:20.756 11:55:27 -- setup/acl.sh@45 -- # setup output config 00:03:20.756 11:55:27 -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:03:20.756 11:55:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:20.756 11:55:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:03:26.026 0000:5e:00.0 (8086 0b60): nvme -> vfio-pci 00:03:26.026 11:55:32 -- setup/acl.sh@47 -- # verify 00:03:26.026 11:55:32 -- setup/acl.sh@28 -- # local dev driver 00:03:26.026 11:55:32 -- setup/acl.sh@48 -- # setup reset 00:03:26.026 11:55:32 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:26.026 11:55:32 -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:03:29.319 00:03:29.319 real 0m8.787s 00:03:29.319 user 0m2.576s 00:03:29.319 sys 0m5.000s 00:03:29.319 11:55:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:29.319 11:55:36 -- common/autotest_common.sh@10 -- # set +x 00:03:29.319 ************************************ 00:03:29.319 END TEST allowed 00:03:29.319 ************************************ 
00:03:29.319 00:03:29.319 real 0m24.379s 00:03:29.319 user 0m7.764s 00:03:29.319 sys 0m14.874s 00:03:29.319 11:55:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:29.319 11:55:36 -- common/autotest_common.sh@10 -- # set +x 00:03:29.319 ************************************ 00:03:29.319 END TEST acl 00:03:29.319 ************************************ 00:03:29.319 11:55:36 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh 00:03:29.319 11:55:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:29.319 11:55:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:29.319 11:55:36 -- common/autotest_common.sh@10 -- # set +x 00:03:29.319 ************************************ 00:03:29.319 START TEST hugepages 00:03:29.319 ************************************ 00:03:29.319 11:55:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/hugepages.sh 00:03:29.579 * Looking for test storage... 
00:03:29.579 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:03:29.579 11:55:36 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:29.579 11:55:36 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:29.579 11:55:36 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:29.579 11:55:36 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:29.579 11:55:36 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:29.579 11:55:36 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:29.579 11:55:36 -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:29.579 11:55:36 -- setup/common.sh@18 -- # local node= 00:03:29.579 11:55:36 -- setup/common.sh@19 -- # local var val 00:03:29.579 11:55:36 -- setup/common.sh@20 -- # local mem_f mem 00:03:29.579 11:55:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.579 11:55:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.579 11:55:36 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.579 11:55:36 -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.579 11:55:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.579 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.579 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.579 11:55:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293552 kB' 'MemFree: 71148680 kB' 'MemAvailable: 74908596 kB' 'Buffers: 12460 kB' 'Cached: 14702944 kB' 'SwapCached: 0 kB' 'Active: 11552716 kB' 'Inactive: 3646404 kB' 'Active(anon): 11080008 kB' 'Inactive(anon): 0 kB' 'Active(file): 472708 kB' 'Inactive(file): 3646404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 487060 kB' 'Mapped: 187120 kB' 'Shmem: 10596292 kB' 'KReclaimable: 476788 kB' 'Slab: 846920 kB' 'SReclaimable: 476788 kB' 'SUnreclaim: 370132 kB' 'KernelStack: 15680 kB' 'PageTables: 
8136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52438228 kB' 'Committed_AS: 12491264 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200988 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1230292 kB' 'DirectMap2M: 20465664 kB' 'DirectMap1G: 79691776 kB' 00:03:29.579 11:55:36 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.579 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.579 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.579 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.579 11:55:36 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.579 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.579 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.579 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.579 11:55:36 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.579 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.579 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.579 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.579 11:55:36 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.579 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.579 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.579 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.579 11:55:36 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.579 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.579 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.579 11:55:36 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:29.579 11:55:36 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.579 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.579 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.579 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.579 11:55:36 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.579 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.579 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.579 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.579 11:55:36 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.579 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.579 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.579 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.579 11:55:36 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.579 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.579 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.579 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.579 11:55:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.579 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.579 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.579 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.579 11:55:36 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.579 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.579 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.579 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.579 11:55:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.579 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.579 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.579 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- 
setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 
11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 
00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- 
setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # continue 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # IFS=': ' 00:03:29.580 11:55:36 -- setup/common.sh@31 -- # read -r var val _ 00:03:29.580 11:55:36 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:29.580 11:55:36 -- setup/common.sh@33 -- # echo 2048 00:03:29.581 11:55:36 -- setup/common.sh@33 -- # return 0 00:03:29.581 11:55:36 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:29.581 11:55:36 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:29.581 11:55:36 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:29.581 11:55:36 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:29.581 11:55:36 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:29.581 11:55:36 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:29.581 11:55:36 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:29.581 11:55:36 -- setup/hugepages.sh@207 -- # get_nodes 00:03:29.581 11:55:36 -- setup/hugepages.sh@27 
-- # local node 00:03:29.581 11:55:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.581 11:55:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:29.581 11:55:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.581 11:55:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:29.581 11:55:36 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:29.581 11:55:36 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:29.581 11:55:36 -- setup/hugepages.sh@208 -- # clear_hp 00:03:29.581 11:55:36 -- setup/hugepages.sh@37 -- # local node hp 00:03:29.581 11:55:36 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:29.581 11:55:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:29.581 11:55:36 -- setup/hugepages.sh@41 -- # echo 0 00:03:29.581 11:55:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:29.581 11:55:36 -- setup/hugepages.sh@41 -- # echo 0 00:03:29.581 11:55:36 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:29.581 11:55:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:29.581 11:55:36 -- setup/hugepages.sh@41 -- # echo 0 00:03:29.581 11:55:36 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:29.581 11:55:36 -- setup/hugepages.sh@41 -- # echo 0 00:03:29.581 11:55:36 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:29.581 11:55:36 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:29.581 11:55:36 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:29.581 11:55:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:29.581 11:55:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:29.581 11:55:36 -- common/autotest_common.sh@10 -- # set +x 00:03:29.581 
************************************ 00:03:29.581 START TEST default_setup 00:03:29.581 ************************************ 00:03:29.581 11:55:36 -- common/autotest_common.sh@1104 -- # default_setup 00:03:29.581 11:55:36 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:29.581 11:55:36 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:29.581 11:55:36 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:29.581 11:55:36 -- setup/hugepages.sh@51 -- # shift 00:03:29.581 11:55:36 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:29.581 11:55:36 -- setup/hugepages.sh@52 -- # local node_ids 00:03:29.581 11:55:36 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:29.581 11:55:36 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:29.581 11:55:36 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:29.581 11:55:36 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:29.581 11:55:36 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:29.581 11:55:36 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:29.581 11:55:36 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:29.581 11:55:36 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:29.581 11:55:36 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:29.581 11:55:36 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:29.581 11:55:36 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:29.581 11:55:36 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:29.581 11:55:36 -- setup/hugepages.sh@73 -- # return 0 00:03:29.581 11:55:36 -- setup/hugepages.sh@137 -- # setup output 00:03:29.581 11:55:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:29.581 11:55:36 -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:03:32.933 0000:85:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:85:05.5 00:03:32.933 0000:ae:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:ae:05.5 
00:03:32.933 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:32.933 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:32.933 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:32.933 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:32.933 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:32.933 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:32.933 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:32.933 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:32.933 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:32.933 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:32.933 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:32.933 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:32.933 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:32.933 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:32.933 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:32.933 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:34.312 0000:5e:00.0 (8086 0b60): nvme -> vfio-pci 00:03:34.312 11:55:41 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:34.312 11:55:41 -- setup/hugepages.sh@89 -- # local node 00:03:34.312 11:55:41 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:34.312 11:55:41 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:34.312 11:55:41 -- setup/hugepages.sh@92 -- # local surp 00:03:34.312 11:55:41 -- setup/hugepages.sh@93 -- # local resv 00:03:34.312 11:55:41 -- setup/hugepages.sh@94 -- # local anon 00:03:34.312 11:55:41 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:34.312 11:55:41 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:34.312 11:55:41 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:34.312 11:55:41 -- setup/common.sh@18 -- # local node= 00:03:34.312 11:55:41 -- setup/common.sh@19 -- # local var val 00:03:34.312 11:55:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.312 11:55:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.312 11:55:41 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.312 11:55:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.312 11:55:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.312 11:55:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.312 11:55:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.312 11:55:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.312 11:55:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293552 kB' 'MemFree: 73313044 kB' 'MemAvailable: 77072832 kB' 'Buffers: 12460 kB' 'Cached: 14703060 kB' 'SwapCached: 0 kB' 'Active: 11571660 kB' 'Inactive: 3646404 kB' 'Active(anon): 11098952 kB' 'Inactive(anon): 0 kB' 'Active(file): 472708 kB' 'Inactive(file): 3646404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505384 kB' 'Mapped: 187132 kB' 'Shmem: 10596408 kB' 'KReclaimable: 476660 kB' 'Slab: 845176 kB' 'SReclaimable: 476660 kB' 'SUnreclaim: 368516 kB' 'KernelStack: 16000 kB' 'PageTables: 8712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486804 kB' 'Committed_AS: 12518300 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201244 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1230292 kB' 'DirectMap2M: 20465664 kB' 'DirectMap1G: 79691776 kB' 00:03:34.312 11:55:41 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.312 11:55:41 -- setup/common.sh@32 -- # continue 00:03:34.312 11:55:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.312 11:55:41 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:34.312 11:55:41 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.312 11:55:41 -- setup/common.sh@32 -- # continue [... identical '[[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]' / 'continue' xtrace lines repeat for every remaining /proc/meminfo field ...] 00:03:34.313 11:55:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.313 11:55:41 -- setup/common.sh@33 -- # echo 0 00:03:34.313 11:55:41 -- setup/common.sh@33 -- # return 0 00:03:34.313 11:55:41 -- setup/hugepages.sh@97 -- # anon=0 00:03:34.313 11:55:41 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:34.313 11:55:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.313 11:55:41 -- setup/common.sh@18 -- # local node= 00:03:34.313 11:55:41 -- setup/common.sh@19 -- # local var val 00:03:34.313 11:55:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.313 11:55:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.313 11:55:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.313 11:55:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.313 11:55:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.313 11:55:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.313 11:55:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.313
11:55:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.313 11:55:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293552 kB' 'MemFree: 73312416 kB' 'MemAvailable: 77072204 kB' 'Buffers: 12460 kB' 'Cached: 14703060 kB' 'SwapCached: 0 kB' 'Active: 11572484 kB' 'Inactive: 3646404 kB' 'Active(anon): 11099776 kB' 'Inactive(anon): 0 kB' 'Active(file): 472708 kB' 'Inactive(file): 3646404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506192 kB' 'Mapped: 187204 kB' 'Shmem: 10596408 kB' 'KReclaimable: 476660 kB' 'Slab: 845248 kB' 'SReclaimable: 476660 kB' 'SUnreclaim: 368588 kB' 'KernelStack: 15872 kB' 'PageTables: 8452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486804 kB' 'Committed_AS: 12518312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201196 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1230292 kB' 'DirectMap2M: 20465664 kB' 'DirectMap1G: 79691776 kB' 00:03:34.313 11:55:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.313 11:55:41 -- setup/common.sh@32 -- # continue 00:03:34.313 11:55:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.313 11:55:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.313 11:55:41 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.313 11:55:41 -- setup/common.sh@32 -- # continue 00:03:34.313 11:55:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.313 11:55:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.313 11:55:41 -- setup/common.sh@32 -- # [[ 
MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.313 11:55:41 -- setup/common.sh@32 -- # continue [... identical '[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]' / 'continue' xtrace lines repeat for every remaining /proc/meminfo field ...] 00:03:34.315 11:55:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.315 11:55:41 -- setup/common.sh@33 -- # echo 0 00:03:34.315 11:55:41 -- setup/common.sh@33 -- # return 0 00:03:34.315 11:55:41
-- setup/hugepages.sh@99 -- # surp=0 00:03:34.315 11:55:41 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:34.315 11:55:41 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:34.315 11:55:41 -- setup/common.sh@18 -- # local node= 00:03:34.315 11:55:41 -- setup/common.sh@19 -- # local var val 00:03:34.315 11:55:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.315 11:55:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.315 11:55:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.315 11:55:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.315 11:55:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.315 11:55:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.315 11:55:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.315 11:55:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.315 11:55:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293552 kB' 'MemFree: 73311576 kB' 'MemAvailable: 77071364 kB' 'Buffers: 12460 kB' 'Cached: 14703072 kB' 'SwapCached: 0 kB' 'Active: 11571324 kB' 'Inactive: 3646404 kB' 'Active(anon): 11098616 kB' 'Inactive(anon): 0 kB' 'Active(file): 472708 kB' 'Inactive(file): 3646404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505460 kB' 'Mapped: 187128 kB' 'Shmem: 10596420 kB' 'KReclaimable: 476660 kB' 'Slab: 845280 kB' 'SReclaimable: 476660 kB' 'SUnreclaim: 368620 kB' 'KernelStack: 15968 kB' 'PageTables: 8960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486804 kB' 'Committed_AS: 12518324 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201196 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1230292 kB' 'DirectMap2M: 20465664 kB' 'DirectMap1G: 79691776 kB' 00:03:34.315 11:55:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.315 11:55:41 -- setup/common.sh@32 -- # continue 00:03:34.315 11:55:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.315 11:55:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.315
[... identical "[[ <field> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue / IFS=': ' / read -r var val _" trace elided for each remaining /proc/meminfo field (MemFree through HugePages_Free); no field matches until HugePages_Rsvd ...]
11:55:41 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:34.316 11:55:41 -- setup/common.sh@33 -- # echo 0 00:03:34.316 11:55:41 -- setup/common.sh@33 -- # return 0 00:03:34.316 11:55:41 -- setup/hugepages.sh@100 -- # resv=0 00:03:34.316 11:55:41 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:34.316 nr_hugepages=1024 00:03:34.316 11:55:41 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:34.316 resv_hugepages=0 00:03:34.316 11:55:41 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:34.316 surplus_hugepages=0 00:03:34.316 11:55:41 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:34.316 anon_hugepages=0 00:03:34.316 11:55:41 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:34.316 11:55:41 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:34.316 11:55:41 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:34.316 11:55:41 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:34.316 11:55:41 -- setup/common.sh@18 -- # local node= 00:03:34.316 11:55:41 -- setup/common.sh@19 -- # local var val 00:03:34.316 11:55:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.316 11:55:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.316 11:55:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.316 11:55:41 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.316 11:55:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.316 11:55:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.316 11:55:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.316 11:55:41 --
setup/common.sh@31 -- # read -r var val _ 00:03:34.316 11:55:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293552 kB' 'MemFree: 73310132 kB' 'MemAvailable: 77069920 kB' 'Buffers: 12460 kB' 'Cached: 14703088 kB' 'SwapCached: 0 kB' 'Active: 11571356 kB' 'Inactive: 3646404 kB' 'Active(anon): 11098648 kB' 'Inactive(anon): 0 kB' 'Active(file): 472708 kB' 'Inactive(file): 3646404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505504 kB' 'Mapped: 187128 kB' 'Shmem: 10596436 kB' 'KReclaimable: 476660 kB' 'Slab: 845280 kB' 'SReclaimable: 476660 kB' 'SUnreclaim: 368620 kB' 'KernelStack: 15856 kB' 'PageTables: 8880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486804 kB' 'Committed_AS: 12516948 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201180 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1230292 kB' 'DirectMap2M: 20465664 kB' 'DirectMap1G: 79691776 kB' 00:03:34.316 11:55:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.316 11:55:41 -- setup/common.sh@32 -- # continue 00:03:34.316 11:55:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.316 11:55:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.316
[... identical "[[ <field> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue / IFS=': ' / read -r var val _" trace elided for each remaining /proc/meminfo field (MemFree through Unaccepted); no field matches until HugePages_Total ...]
11:55:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:34.318 11:55:41 -- setup/common.sh@33 -- # echo 1024 00:03:34.318 11:55:41 -- setup/common.sh@33 -- # return 0 00:03:34.318 11:55:41 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:34.318 11:55:41 -- setup/hugepages.sh@112 -- # get_nodes 00:03:34.318 11:55:41 -- setup/hugepages.sh@27 -- # local node 00:03:34.318 11:55:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.318 11:55:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:34.318 11:55:41 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:34.318 11:55:41 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:34.318 11:55:41 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:34.318 11:55:41 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:34.318 11:55:41 -- setup/hugepages.sh@115 --
# for node in "${!nodes_test[@]}" 00:03:34.318 11:55:41 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:34.318 11:55:41 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:34.318 11:55:41 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:34.318 11:55:41 -- setup/common.sh@18 -- # local node=0 00:03:34.318 11:55:41 -- setup/common.sh@19 -- # local var val 00:03:34.318 11:55:41 -- setup/common.sh@20 -- # local mem_f mem 00:03:34.318 11:55:41 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.318 11:55:41 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:34.318 11:55:41 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:34.318 11:55:41 -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.318 11:55:41 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.318 11:55:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.318 11:55:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.318 11:55:41 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069932 kB' 'MemFree: 31023032 kB' 'MemUsed: 17046900 kB' 'SwapCached: 0 kB' 'Active: 10399432 kB' 'Inactive: 3496976 kB' 'Active(anon): 10150356 kB' 'Inactive(anon): 0 kB' 'Active(file): 249076 kB' 'Inactive(file): 3496976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 13517672 kB' 'Mapped: 103532 kB' 'AnonPages: 381936 kB' 'Shmem: 9771620 kB' 'KernelStack: 9272 kB' 'PageTables: 5384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 345868 kB' 'Slab: 574236 kB' 'SReclaimable: 345868 kB' 'SUnreclaim: 228368 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:34.318 11:55:41 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.318 11:55:41 -- setup/common.sh@32 -- # continue 00:03:34.318 11:55:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.318 11:55:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.318
[... identical "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue / IFS=': ' / read -r var val _" trace elided for the node0 meminfo fields MemFree through SecPageTables ...]
11:55:41 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.319 11:55:41 -- setup/common.sh@32 -- # continue 00:03:34.319 11:55:41 -- setup/common.sh@31 -- # IFS=': '
00:03:34.319 11:55:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.319 11:55:41 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.319 11:55:41 -- setup/common.sh@32 -- # continue 00:03:34.319 11:55:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.319 11:55:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.319 11:55:41 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.319 11:55:41 -- setup/common.sh@32 -- # continue 00:03:34.319 11:55:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.319 11:55:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.319 11:55:41 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.319 11:55:41 -- setup/common.sh@32 -- # continue 00:03:34.319 11:55:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.319 11:55:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.319 11:55:41 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.319 11:55:41 -- setup/common.sh@32 -- # continue 00:03:34.319 11:55:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.319 11:55:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.319 11:55:41 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.319 11:55:41 -- setup/common.sh@32 -- # continue 00:03:34.319 11:55:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.319 11:55:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.319 11:55:41 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.319 11:55:41 -- setup/common.sh@32 -- # continue 00:03:34.319 11:55:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.319 11:55:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.319 11:55:41 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.319 11:55:41 -- setup/common.sh@32 -- # continue 00:03:34.319 11:55:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.319 11:55:41 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:34.319 11:55:41 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.319 11:55:41 -- setup/common.sh@32 -- # continue 00:03:34.319 11:55:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.319 11:55:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.319 11:55:41 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.319 11:55:41 -- setup/common.sh@32 -- # continue 00:03:34.319 11:55:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.319 11:55:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.319 11:55:41 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.319 11:55:41 -- setup/common.sh@32 -- # continue 00:03:34.319 11:55:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.319 11:55:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.319 11:55:41 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.319 11:55:41 -- setup/common.sh@32 -- # continue 00:03:34.319 11:55:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.319 11:55:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.319 11:55:41 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.319 11:55:41 -- setup/common.sh@32 -- # continue 00:03:34.319 11:55:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.319 11:55:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.319 11:55:41 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.319 11:55:41 -- setup/common.sh@32 -- # continue 00:03:34.319 11:55:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.319 11:55:41 -- setup/common.sh@31 -- # read -r var val _ 00:03:34.319 11:55:41 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.578 11:55:41 -- setup/common.sh@32 -- # continue 00:03:34.578 11:55:41 -- setup/common.sh@31 -- # IFS=': ' 00:03:34.578 11:55:41 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:34.578 11:55:41 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:34.578 11:55:41 -- setup/common.sh@33 -- # echo 0 00:03:34.578 11:55:41 -- setup/common.sh@33 -- # return 0 00:03:34.578 11:55:41 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:34.578 11:55:41 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:34.578 11:55:41 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:34.578 11:55:41 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:34.578 11:55:41 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:34.578 node0=1024 expecting 1024 00:03:34.578 11:55:41 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:34.578 00:03:34.578 real 0m4.851s 00:03:34.578 user 0m1.309s 00:03:34.578 sys 0m2.320s 00:03:34.578 11:55:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:34.578 11:55:41 -- common/autotest_common.sh@10 -- # set +x 00:03:34.578 ************************************ 00:03:34.578 END TEST default_setup 00:03:34.578 ************************************ 00:03:34.578 11:55:41 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:34.578 11:55:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:34.578 11:55:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:34.578 11:55:41 -- common/autotest_common.sh@10 -- # set +x 00:03:34.578 ************************************ 00:03:34.578 START TEST per_node_1G_alloc 00:03:34.578 ************************************ 00:03:34.578 11:55:41 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:03:34.578 11:55:41 -- setup/hugepages.sh@143 -- # local IFS=, 00:03:34.578 11:55:41 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:34.578 11:55:41 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:34.578 11:55:41 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:34.578 
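The long scan traced above is `get_meminfo` walking every `key: value` line of `/proc/meminfo` (or the per-node `/sys/devices/system/node/nodeN/meminfo`, which prefixes each line with `Node N `) and echoing the value once the requested key matches. A minimal standalone sketch of that loop, not the actual `setup/common.sh` (the file argument and function name here are illustrative, added so the sketch can be exercised against a sample file):

```shell
#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern below

# Sketch of the get_meminfo scan seen in the trace: read a meminfo-style
# file, strip any "Node N " prefix, split each line on ': ', and print
# the value for the requested key.
get_meminfo_sketch() {
  local get=$1 mem_f=${2:-/proc/meminfo}
  local -a mem
  mapfile -t mem < "$mem_f"
  # Per-node sysfs meminfo prefixes every line with "Node N "; strip it
  # so both file formats parse the same way (as the trace does).
  mem=("${mem[@]#Node +([0-9]) }")
  local line var val _rest
  for line in "${mem[@]}"; do
    IFS=': ' read -r var val _rest <<< "$line"
    if [[ $var == "$get" ]]; then
      echo "$val"
      return 0
    fi
  done
  return 1
}
```

The trace compares `$var` against the target with a glob-escaped pattern (`\H\u\g\e\P\a\g\e\s\_\S\u\r\p`) and `continue`s on every non-matching field, which is why each meminfo key appears once per query; the sketch expresses the same match with a quoted string.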
11:55:41 -- setup/hugepages.sh@51 -- # shift 00:03:34.578 11:55:41 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:34.578 11:55:41 -- setup/hugepages.sh@52 -- # local node_ids 00:03:34.578 11:55:41 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:34.578 11:55:41 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:34.578 11:55:41 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:34.578 11:55:41 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:34.578 11:55:41 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:34.578 11:55:41 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:34.578 11:55:41 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:34.578 11:55:41 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:34.578 11:55:41 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:34.578 11:55:41 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:34.578 11:55:41 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:34.578 11:55:41 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:34.578 11:55:41 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:34.578 11:55:41 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:34.578 11:55:41 -- setup/hugepages.sh@73 -- # return 0 00:03:34.578 11:55:41 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:34.578 11:55:41 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:34.578 11:55:41 -- setup/hugepages.sh@146 -- # setup output 00:03:34.578 11:55:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.578 11:55:41 -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:03:37.867 0000:85:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:85:05.5 00:03:37.867 0000:ae:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:ae:05.5 00:03:37.867 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:37.867 0000:5e:00.0 (8086 0b60): Already using the 
vfio-pci driver 00:03:37.867 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:37.867 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:37.867 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:37.867 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:37.867 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:37.867 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:37.867 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:37.867 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:37.867 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:37.867 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:37.867 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:37.867 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:37.867 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:37.867 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:37.867 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:37.867 11:55:45 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:37.867 11:55:45 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:37.867 11:55:45 -- setup/hugepages.sh@89 -- # local node 00:03:37.867 11:55:45 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:37.867 11:55:45 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:37.867 11:55:45 -- setup/hugepages.sh@92 -- # local surp 00:03:37.867 11:55:45 -- setup/hugepages.sh@93 -- # local resv 00:03:37.867 11:55:45 -- setup/hugepages.sh@94 -- # local anon 00:03:37.867 11:55:45 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:37.867 11:55:45 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:37.867 11:55:45 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:37.867 11:55:45 -- setup/common.sh@18 -- # local node= 00:03:37.867 11:55:45 -- setup/common.sh@19 
-- # local var val 00:03:37.867 11:55:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.867 11:55:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.867 11:55:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.867 11:55:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.867 11:55:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.867 11:55:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.867 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:55:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293552 kB' 'MemFree: 73294528 kB' 'MemAvailable: 77054316 kB' 'Buffers: 12460 kB' 'Cached: 14703160 kB' 'SwapCached: 0 kB' 'Active: 11568752 kB' 'Inactive: 3646404 kB' 'Active(anon): 11096044 kB' 'Inactive(anon): 0 kB' 'Active(file): 472708 kB' 'Inactive(file): 3646404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502308 kB' 'Mapped: 186356 kB' 'Shmem: 10596508 kB' 'KReclaimable: 476660 kB' 'Slab: 845828 kB' 'SReclaimable: 476660 kB' 'SUnreclaim: 369168 kB' 'KernelStack: 15744 kB' 'PageTables: 8484 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486804 kB' 'Committed_AS: 12500424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201148 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1230292 kB' 'DirectMap2M: 20465664 kB' 'DirectMap1G: 79691776 kB' 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.868 11:55:45 -- 
setup/common.sh@32 -- # continue 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # continue 
00:03:37.868 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 
00:03:37.868 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.868 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.868 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:55:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.869 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.869 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:55:45 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.869 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.869 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:55:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.869 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.869 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:55:45 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.869 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.869 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:55:45 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.869 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.869 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:55:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.869 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.869 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 
11:55:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.869 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.869 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:55:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.869 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.869 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:55:45 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.869 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.869 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:55:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.869 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.869 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:55:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.869 11:55:45 -- setup/common.sh@33 -- # echo 0 00:03:37.869 11:55:45 -- setup/common.sh@33 -- # return 0 00:03:37.869 11:55:45 -- setup/hugepages.sh@97 -- # anon=0 00:03:37.869 11:55:45 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:37.869 11:55:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.869 11:55:45 -- setup/common.sh@18 -- # local node= 00:03:37.869 11:55:45 -- setup/common.sh@19 -- # local var val 00:03:37.869 11:55:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:37.869 11:55:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.869 11:55:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.869 11:55:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.869 11:55:45 -- setup/common.sh@28 -- # 
mapfile -t mem 00:03:37.869 11:55:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.869 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:55:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293552 kB' 'MemFree: 73296788 kB' 'MemAvailable: 77056576 kB' 'Buffers: 12460 kB' 'Cached: 14703160 kB' 'SwapCached: 0 kB' 'Active: 11568860 kB' 'Inactive: 3646404 kB' 'Active(anon): 11096152 kB' 'Inactive(anon): 0 kB' 'Active(file): 472708 kB' 'Inactive(file): 3646404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502432 kB' 'Mapped: 186360 kB' 'Shmem: 10596508 kB' 'KReclaimable: 476660 kB' 'Slab: 845776 kB' 'SReclaimable: 476660 kB' 'SUnreclaim: 369116 kB' 'KernelStack: 15680 kB' 'PageTables: 8268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486804 kB' 'Committed_AS: 12500436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201068 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1230292 kB' 'DirectMap2M: 20465664 kB' 'DirectMap1G: 79691776 kB' 00:03:37.869 11:55:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.869 11:55:45 -- setup/common.sh@32 -- # continue 00:03:37.869 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:55:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.869 11:55:45 -- setup/common.sh@32 -- # continue 
00:03:37.869 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:55:45 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.869 11:55:45 -- setup/common.sh@32 -- # continue
[identical IFS=': ' / read -r var val _ / [[ field == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue trace repeated for each remaining /proc/meminfo field, Buffers through HugePages_Rsvd]
00:03:38.133 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.133 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.133 11:55:45 -- setup/common.sh@32 --
# [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.133 11:55:45 -- setup/common.sh@33 -- # echo 0 00:03:38.133 11:55:45 -- setup/common.sh@33 -- # return 0 00:03:38.133 11:55:45 -- setup/hugepages.sh@99 -- # surp=0 00:03:38.133 11:55:45 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:38.133 11:55:45 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:38.133 11:55:45 -- setup/common.sh@18 -- # local node= 00:03:38.133 11:55:45 -- setup/common.sh@19 -- # local var val 00:03:38.133 11:55:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.133 11:55:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.133 11:55:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.133 11:55:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.133 11:55:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.133 11:55:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.133 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.133 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.133 11:55:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293552 kB' 'MemFree: 73296844 kB' 'MemAvailable: 77056632 kB' 'Buffers: 12460 kB' 'Cached: 14703176 kB' 'SwapCached: 0 kB' 'Active: 11567796 kB' 'Inactive: 3646404 kB' 'Active(anon): 11095088 kB' 'Inactive(anon): 0 kB' 'Active(file): 472708 kB' 'Inactive(file): 3646404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501364 kB' 'Mapped: 186332 kB' 'Shmem: 10596524 kB' 'KReclaimable: 476660 kB' 'Slab: 845776 kB' 'SReclaimable: 476660 kB' 'SUnreclaim: 369116 kB' 'KernelStack: 15664 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486804 kB' 'Committed_AS: 12500452 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201068 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 
kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1230292 kB' 'DirectMap2M: 20465664 kB' 'DirectMap1G: 79691776 kB' 00:03:38.133 11:55:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.133 11:55:45 -- setup/common.sh@32 -- # continue
[identical IFS=': ' / read -r var val _ / [[ field == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue trace repeated for each remaining /proc/meminfo field, MemFree through HugePages_Free]
00:03:38.134 11:55:45 -- setup/common.sh@32 -- # [[ HugePages_Total
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.134 11:55:45 -- setup/common.sh@32 -- # continue 00:03:38.134 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.134 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.134 11:55:45 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.134 11:55:45 -- setup/common.sh@32 -- # continue 00:03:38.134 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.134 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.134 11:55:45 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:38.134 11:55:45 -- setup/common.sh@33 -- # echo 0 00:03:38.134 11:55:45 -- setup/common.sh@33 -- # return 0 00:03:38.134 11:55:45 -- setup/hugepages.sh@100 -- # resv=0 00:03:38.134 11:55:45 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:38.134 nr_hugepages=1024 00:03:38.134 11:55:45 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:38.134 resv_hugepages=0 00:03:38.134 11:55:45 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:38.134 surplus_hugepages=0 00:03:38.134 11:55:45 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:38.134 anon_hugepages=0 00:03:38.134 11:55:45 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:38.134 11:55:45 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:38.134 11:55:45 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:38.134 11:55:45 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:38.134 11:55:45 -- setup/common.sh@18 -- # local node= 00:03:38.134 11:55:45 -- setup/common.sh@19 -- # local var val 00:03:38.134 11:55:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.134 11:55:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.134 11:55:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:38.134 11:55:45 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:38.134 11:55:45 -- setup/common.sh@28 -- 
# mapfile -t mem 00:03:38.134 11:55:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.134 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.134 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.134 11:55:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293552 kB' 'MemFree: 73298436 kB' 'MemAvailable: 77058224 kB' 'Buffers: 12460 kB' 'Cached: 14703188 kB' 'SwapCached: 0 kB' 'Active: 11567460 kB' 'Inactive: 3646404 kB' 'Active(anon): 11094752 kB' 'Inactive(anon): 0 kB' 'Active(file): 472708 kB' 'Inactive(file): 3646404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501504 kB' 'Mapped: 186256 kB' 'Shmem: 10596536 kB' 'KReclaimable: 476660 kB' 'Slab: 845788 kB' 'SReclaimable: 476660 kB' 'SUnreclaim: 369128 kB' 'KernelStack: 15680 kB' 'PageTables: 8252 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486804 kB' 'Committed_AS: 12500464 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201068 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1230292 kB' 'DirectMap2M: 20465664 kB' 'DirectMap1G: 79691776 kB' 00:03:38.134 11:55:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.134 11:55:45 -- setup/common.sh@32 -- # continue 00:03:38.134 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.134 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.134 11:55:45 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.134 11:55:45 -- setup/common.sh@32 -- # continue 
00:03:38.134 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.134 11:55:45 -- setup/common.sh@31 -- # read -r var val _
[identical IFS=': ' / read -r var val _ / [[ field == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue trace repeated for each remaining /proc/meminfo field while scanning for HugePages_Total]
11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.135 11:55:45 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.135 11:55:45 -- setup/common.sh@32 -- # continue 00:03:38.135 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.135 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.135 11:55:45 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.135 11:55:45 -- setup/common.sh@32 -- # continue 00:03:38.135 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.135 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.135 11:55:45 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.135 11:55:45 -- setup/common.sh@32 -- # continue 00:03:38.135 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.135 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.135 11:55:45 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.135 11:55:45 -- setup/common.sh@32 -- # continue 00:03:38.135 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.135 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.135 11:55:45 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.135 11:55:45 -- setup/common.sh@32 -- # continue 00:03:38.135 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.135 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.135 11:55:45 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.135 11:55:45 -- setup/common.sh@32 -- # continue 00:03:38.135 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.135 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.135 11:55:45 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.135 11:55:45 -- setup/common.sh@32 -- # continue 00:03:38.135 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.135 11:55:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:38.135 11:55:45 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.135 11:55:45 -- setup/common.sh@32 -- # continue 00:03:38.135 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.135 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.135 11:55:45 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.135 11:55:45 -- setup/common.sh@32 -- # continue 00:03:38.135 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.135 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.135 11:55:45 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.135 11:55:45 -- setup/common.sh@32 -- # continue 00:03:38.135 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.135 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.135 11:55:45 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.135 11:55:45 -- setup/common.sh@32 -- # continue 00:03:38.135 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.135 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.135 11:55:45 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.135 11:55:45 -- setup/common.sh@32 -- # continue 00:03:38.135 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.135 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.135 11:55:45 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.135 11:55:45 -- setup/common.sh@32 -- # continue 00:03:38.135 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.135 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.135 11:55:45 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.135 11:55:45 -- setup/common.sh@32 -- # continue 00:03:38.135 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.135 11:55:45 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:38.135 11:55:45 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.135 11:55:45 -- setup/common.sh@32 -- # continue 00:03:38.135 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.135 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.135 11:55:45 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.135 11:55:45 -- setup/common.sh@32 -- # continue 00:03:38.135 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.135 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.136 11:55:45 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.136 11:55:45 -- setup/common.sh@32 -- # continue 00:03:38.136 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.136 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.136 11:55:45 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.136 11:55:45 -- setup/common.sh@32 -- # continue 00:03:38.136 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.136 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.136 11:55:45 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:38.136 11:55:45 -- setup/common.sh@33 -- # echo 1024 00:03:38.136 11:55:45 -- setup/common.sh@33 -- # return 0 00:03:38.136 11:55:45 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:38.136 11:55:45 -- setup/hugepages.sh@112 -- # get_nodes 00:03:38.136 11:55:45 -- setup/hugepages.sh@27 -- # local node 00:03:38.136 11:55:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.136 11:55:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:38.136 11:55:45 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:38.136 11:55:45 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:38.136 
11:55:45 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:38.136 11:55:45 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:38.136 11:55:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:38.136 11:55:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:38.136 11:55:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:38.136 11:55:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.136 11:55:45 -- setup/common.sh@18 -- # local node=0 00:03:38.136 11:55:45 -- setup/common.sh@19 -- # local var val 00:03:38.136 11:55:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.136 11:55:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.136 11:55:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:38.136 11:55:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:38.136 11:55:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.136 11:55:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.136 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.136 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.136 11:55:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069932 kB' 'MemFree: 32064708 kB' 'MemUsed: 16005224 kB' 'SwapCached: 0 kB' 'Active: 10398756 kB' 'Inactive: 3496976 kB' 'Active(anon): 10149680 kB' 'Inactive(anon): 0 kB' 'Active(file): 249076 kB' 'Inactive(file): 3496976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 13517780 kB' 'Mapped: 102732 kB' 'AnonPages: 381092 kB' 'Shmem: 9771728 kB' 'KernelStack: 9208 kB' 'PageTables: 5132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 345868 kB' 'Slab: 574648 kB' 'SReclaimable: 345868 kB' 'SUnreclaim: 228780 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:38.136 11:55:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.136 11:55:45 -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read -r var val _ / compare / continue trace repeated for every remaining node0 meminfo field, MemFree through HugePages_Free ...] 00:03:38.137 11:55:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.137 11:55:45 -- setup/common.sh@33 -- # echo 0 00:03:38.137 11:55:45 -- setup/common.sh@33 -- # return 0 00:03:38.137 11:55:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:38.137 11:55:45 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:38.137 11:55:45 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:38.137 11:55:45 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:38.137 11:55:45 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:38.137 11:55:45 -- setup/common.sh@18 -- # local node=1 00:03:38.137 11:55:45 -- setup/common.sh@19 -- # local var val 00:03:38.137 11:55:45 -- setup/common.sh@20 -- # local mem_f mem 00:03:38.137 11:55:45 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:38.137 11:55:45 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:38.137 11:55:45 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:38.137 11:55:45 -- setup/common.sh@28 -- # mapfile -t mem 00:03:38.137 11:55:45 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:38.137 11:55:45 -- setup/common.sh@31 -- # IFS=': ' 00:03:38.137 11:55:45 -- setup/common.sh@31 -- # read -r var val _ 00:03:38.137 11:55:45 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44223620 kB' 'MemFree: 41233980 kB' 'MemUsed: 2989640 kB' 'SwapCached: 0 kB' 'Active: 1168360 kB' 'Inactive: 149428 kB' 'Active(anon): 944728 kB' 'Inactive(anon): 0 kB' 'Active(file): 223632 kB' 'Inactive(file): 149428 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1197896 kB' 'Mapped: 83524 kB' 'AnonPages: 120024 kB' 'Shmem: 824836 kB' 'KernelStack: 6456 kB'
'PageTables: 3068 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130792 kB' 'Slab: 271140 kB' 'SReclaimable: 130792 kB' 'SUnreclaim: 140348 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:38.137 11:55:45 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.137 11:55:45 -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read -r var val _ / compare / continue trace repeated for every remaining node1 meminfo field, MemFree through HugePages_Free ...] 00:03:38.138 11:55:45 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:38.138 11:55:45 -- setup/common.sh@33 -- # echo 0 00:03:38.138 11:55:45 -- setup/common.sh@33 -- # return 0 00:03:38.138 11:55:45 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:38.138 11:55:45 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:38.138 11:55:45 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:38.138 11:55:45 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:38.138 11:55:45 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:38.138 node0=512 expecting 512 00:03:38.138 11:55:45 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:38.138 11:55:45 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:38.138 11:55:45 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:38.138 11:55:45 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:38.138 node1=512 expecting 512 00:03:38.138 11:55:45 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:38.138 00:03:38.138 real 0m3.668s 00:03:38.138 user 0m1.400s 00:03:38.138 sys 0m2.373s 00:03:38.138 11:55:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:38.138 11:55:45 -- common/autotest_common.sh@10 -- # set +x 00:03:38.138 ************************************ 00:03:38.138 END TEST per_node_1G_alloc 00:03:38.138
************************************ 00:03:38.138 11:55:45 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:38.138 11:55:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:38.138 11:55:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:38.138 11:55:45 -- common/autotest_common.sh@10 -- # set +x 00:03:38.138 ************************************ 00:03:38.138 START TEST even_2G_alloc 00:03:38.138 ************************************ 00:03:38.138 11:55:45 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:03:38.138 11:55:45 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:38.138 11:55:45 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:38.138 11:55:45 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:38.138 11:55:45 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:38.138 11:55:45 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:38.138 11:55:45 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:38.138 11:55:45 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:38.138 11:55:45 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:38.138 11:55:45 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:38.138 11:55:45 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:38.138 11:55:45 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:38.138 11:55:45 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:38.138 11:55:45 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:38.138 11:55:45 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:38.138 11:55:45 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:38.138 11:55:45 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:38.138 11:55:45 -- setup/hugepages.sh@83 -- # : 512 00:03:38.138 11:55:45 -- setup/hugepages.sh@84 -- # : 1 00:03:38.138 11:55:45 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:38.138 11:55:45 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:38.138 
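The setup/common.sh xtrace above (the @31-@33 lines) is a key lookup over /proc/meminfo: each `Key: value` field is read with `IFS=': '`, non-matching keys are skipped with `continue`, and the value of the requested key is echoed when it matches (0 for HugePages_Surp here). A minimal sketch of that loop, hedged: the function name mirrors the trace, but the optional file argument is an assumption added so the sketch is self-testable, and the real script may differ in details.

```shell
#!/usr/bin/env bash
# Hedged sketch of the meminfo lookup seen in the setup/common.sh trace:
# read each "Key: value" line, skip keys that don't match, echo the value
# of the requested one. The second (file) argument is an illustration-only
# addition; the trace reads /proc/meminfo (or a per-node meminfo under
# /sys/devices/system/node when a node is given).
get_meminfo() {
    local get=$1
    local mem_f=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # e.g. skip MemTotal, MemFree, ...
        echo "$val"
        return 0
    done < "$mem_f"
    echo 0   # key absent: report 0, matching the trace's "echo 0" fallback
}
```

For example, `get_meminfo HugePages_Surp` prints the surplus hugepage count that verify_nr_hugepages stores in `surp`.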
11:55:45 -- setup/hugepages.sh@83 -- # : 0
00:03:38.138 11:55:45 -- setup/hugepages.sh@84 -- # : 0
00:03:38.138 11:55:45 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:38.138 11:55:45 -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:03:38.138 11:55:45 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:03:38.138 11:55:45 -- setup/hugepages.sh@153 -- # setup output
00:03:38.138 11:55:45 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:38.138 11:55:45 -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh
00:03:42.332 0000:85:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:85:05.5
00:03:42.333 0000:ae:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:ae:05.5
00:03:42.333 0000:00:04.7 (8086 2021): Already using the vfio-pci driver
00:03:42.333 0000:5e:00.0 (8086 0b60): Already using the vfio-pci driver
00:03:42.333 0000:00:04.6 (8086 2021): Already using the vfio-pci driver
00:03:42.333 0000:00:04.5 (8086 2021): Already using the vfio-pci driver
00:03:42.333 0000:00:04.4 (8086 2021): Already using the vfio-pci driver
00:03:42.333 0000:00:04.3 (8086 2021): Already using the vfio-pci driver
00:03:42.333 0000:00:04.2 (8086 2021): Already using the vfio-pci driver
00:03:42.333 0000:00:04.1 (8086 2021): Already using the vfio-pci driver
00:03:42.333 0000:00:04.0 (8086 2021): Already using the vfio-pci driver
00:03:42.333 0000:80:04.7 (8086 2021): Already using the vfio-pci driver
00:03:42.333 0000:80:04.6 (8086 2021): Already using the vfio-pci driver
00:03:42.333 0000:80:04.5 (8086 2021): Already using the vfio-pci driver
00:03:42.333 0000:80:04.4 (8086 2021): Already using the vfio-pci driver
00:03:42.333 0000:80:04.3 (8086 2021): Already using the vfio-pci driver
00:03:42.333 0000:80:04.2 (8086 2021): Already using the vfio-pci driver
00:03:42.333 0000:80:04.1 (8086 2021): Already using the vfio-pci driver
00:03:42.333 0000:80:04.0 (8086 2021): Already using the vfio-pci driver
00:03:42.333
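The hugepages.sh trace above (@49-@84 together with the @153 NRHUGE/HUGE_EVEN_ALLOC lines) shows get_test_nr_hugepages_per_node splitting 1024 two-megabyte pages evenly across the two NUMA nodes, 512 each, before setup.sh applies them. A rough sketch of that split, hedged: the variable names mirror the trace, but this is an illustration of the even-allocation idea, not the upstream function's exact control flow.

```shell
#!/usr/bin/env bash
# Rough sketch of the even per-node split implied by the hugepages.sh
# trace: divide _nr_hugepages across _no_nodes, walking the node index
# downward as the @81-@84 loop does.
_nr_hugepages=1024
_no_nodes=2
declare -a nodes_test
per_node=$((_nr_hugepages / _no_nodes))   # 1024 / 2 = 512 pages per node
while (( _no_nodes > 0 )); do
    nodes_test[_no_nodes - 1]=$per_node
    _no_nodes=$((_no_nodes - 1))
done
# Mirrors the log's verification lines: "node0=512 expecting 512", etc.
echo "node0=${nodes_test[0]} expecting 512"
echo "node1=${nodes_test[1]} expecting 512"
```

With HUGE_EVEN_ALLOC=yes, verify_nr_hugepages then checks each node's HugePages_Total against this expected per-node count.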
11:55:48 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:42.333 11:55:48 -- setup/hugepages.sh@89 -- # local node 00:03:42.333 11:55:48 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:42.333 11:55:48 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:42.333 11:55:48 -- setup/hugepages.sh@92 -- # local surp 00:03:42.333 11:55:48 -- setup/hugepages.sh@93 -- # local resv 00:03:42.333 11:55:48 -- setup/hugepages.sh@94 -- # local anon 00:03:42.333 11:55:49 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:42.333 11:55:49 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:42.333 11:55:49 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:42.333 11:55:49 -- setup/common.sh@18 -- # local node= 00:03:42.333 11:55:49 -- setup/common.sh@19 -- # local var val 00:03:42.333 11:55:49 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.333 11:55:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.333 11:55:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.333 11:55:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.333 11:55:49 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.333 11:55:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 11:55:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293552 kB' 'MemFree: 73316104 kB' 'MemAvailable: 77075892 kB' 'Buffers: 12460 kB' 'Cached: 14703280 kB' 'SwapCached: 0 kB' 'Active: 11568052 kB' 'Inactive: 3646404 kB' 'Active(anon): 11095344 kB' 'Inactive(anon): 0 kB' 'Active(file): 472708 kB' 'Inactive(file): 3646404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501544 kB' 'Mapped: 186316 kB' 'Shmem: 10596628 kB' 'KReclaimable: 476660 kB' 'Slab: 845724 kB' 
'SReclaimable: 476660 kB' 'SUnreclaim: 369064 kB' 'KernelStack: 15664 kB' 'PageTables: 8240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486804 kB' 'Committed_AS: 12501068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201052 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1230292 kB' 'DirectMap2M: 20465664 kB' 'DirectMap1G: 79691776 kB' 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # continue 
00:03:42.333 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.333 11:55:49 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 11:55:49 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.333 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.333 11:55:49 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 
11:55:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # [[ 
CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # [[ AnonHugePages == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:42.334 11:55:49 -- setup/common.sh@33 -- # echo 0 00:03:42.334 11:55:49 -- setup/common.sh@33 -- # return 0 00:03:42.334 11:55:49 -- setup/hugepages.sh@97 -- # anon=0 00:03:42.334 11:55:49 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:42.334 11:55:49 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.334 11:55:49 -- setup/common.sh@18 -- # local node= 00:03:42.334 11:55:49 -- setup/common.sh@19 -- # local var val 00:03:42.334 11:55:49 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.334 11:55:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.334 11:55:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.334 11:55:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.334 11:55:49 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.334 11:55:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 11:55:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293552 kB' 'MemFree: 73317264 kB' 'MemAvailable: 77077052 kB' 'Buffers: 12460 kB' 'Cached: 14703284 kB' 'SwapCached: 0 kB' 'Active: 11568484 kB' 'Inactive: 3646404 kB' 'Active(anon): 11095776 kB' 'Inactive(anon): 0 kB' 'Active(file): 472708 kB' 'Inactive(file): 3646404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501952 kB' 'Mapped: 186352 kB' 'Shmem: 10596632 kB' 'KReclaimable: 476660 kB' 'Slab: 845788 kB' 'SReclaimable: 476660 kB' 'SUnreclaim: 369128 kB' 'KernelStack: 15664 kB' 'PageTables: 8216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486804 kB' 'Committed_AS: 12501076 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201036 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 
kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1230292 kB' 'DirectMap2M: 20465664 kB' 'DirectMap1G: 79691776 kB' 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 
00:03:42.334 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.334 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.334 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 
00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 
-- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.335 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.335 11:55:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.336 11:55:49 -- setup/common.sh@33 -- # echo 0 00:03:42.336 11:55:49 -- setup/common.sh@33 -- # return 0 00:03:42.336 11:55:49 -- setup/hugepages.sh@99 -- # surp=0 00:03:42.336 11:55:49 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:42.336 11:55:49 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:42.336 11:55:49 -- setup/common.sh@18 -- # local node= 00:03:42.336 11:55:49 -- setup/common.sh@19 -- # local var val 00:03:42.336 11:55:49 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.336 11:55:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.336 11:55:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.336 11:55:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.336 11:55:49 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.336 11:55:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 11:55:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293552 kB' 'MemFree: 
73317840 kB' 'MemAvailable: 77077628 kB' 'Buffers: 12460 kB' 'Cached: 14703284 kB' 'SwapCached: 0 kB' 'Active: 11568096 kB' 'Inactive: 3646404 kB' 'Active(anon): 11095388 kB' 'Inactive(anon): 0 kB' 'Active(file): 472708 kB' 'Inactive(file): 3646404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502000 kB' 'Mapped: 186268 kB' 'Shmem: 10596632 kB' 'KReclaimable: 476660 kB' 'Slab: 845780 kB' 'SReclaimable: 476660 kB' 'SUnreclaim: 369120 kB' 'KernelStack: 15696 kB' 'PageTables: 8304 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486804 kB' 'Committed_AS: 12501092 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201052 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1230292 kB' 'DirectMap2M: 20465664 kB' 'DirectMap1G: 79691776 kB' 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.336 11:55:49 -- setup/common.sh@31 
-- # IFS=': ' 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 11:55:49 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 11:55:49 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.336 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.336 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # [[ CmaTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:42.337 11:55:49 -- setup/common.sh@33 -- # echo 0 00:03:42.337 11:55:49 -- setup/common.sh@33 -- # return 0 00:03:42.337 11:55:49 -- setup/hugepages.sh@100 -- # resv=0 00:03:42.337 11:55:49 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:42.337 nr_hugepages=1024 00:03:42.337 11:55:49 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:42.337 resv_hugepages=0 00:03:42.337 11:55:49 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:42.337 surplus_hugepages=0 00:03:42.337 11:55:49 -- setup/hugepages.sh@105 -- # 
echo anon_hugepages=0 00:03:42.337 anon_hugepages=0 00:03:42.337 11:55:49 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:42.337 11:55:49 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:42.337 11:55:49 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:42.337 11:55:49 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:42.337 11:55:49 -- setup/common.sh@18 -- # local node= 00:03:42.337 11:55:49 -- setup/common.sh@19 -- # local var val 00:03:42.337 11:55:49 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.337 11:55:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.337 11:55:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:42.337 11:55:49 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:42.337 11:55:49 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.337 11:55:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.337 11:55:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293552 kB' 'MemFree: 73317772 kB' 'MemAvailable: 77077560 kB' 'Buffers: 12460 kB' 'Cached: 14703316 kB' 'SwapCached: 0 kB' 'Active: 11568012 kB' 'Inactive: 3646404 kB' 'Active(anon): 11095304 kB' 'Inactive(anon): 0 kB' 'Active(file): 472708 kB' 'Inactive(file): 3646404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501868 kB' 'Mapped: 186268 kB' 'Shmem: 10596664 kB' 'KReclaimable: 476660 kB' 'Slab: 845780 kB' 'SReclaimable: 476660 kB' 'SUnreclaim: 369120 kB' 'KernelStack: 15664 kB' 'PageTables: 8204 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486804 kB' 'Committed_AS: 12501108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201052 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 
kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1230292 kB' 'DirectMap2M: 20465664 kB' 'DirectMap1G: 79691776 kB' 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.337 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.337 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.338 11:55:49 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.338 11:55:49 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 
00:03:42.338 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.338 11:55:49 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.338 11:55:49 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.338 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.338 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.339 11:55:49 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.339 11:55:49 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:42.339 11:55:49 -- setup/common.sh@33 -- # echo 1024 00:03:42.339 11:55:49 -- setup/common.sh@33 -- # return 0 00:03:42.339 11:55:49 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:42.339 11:55:49 -- setup/hugepages.sh@112 -- # get_nodes 00:03:42.339 11:55:49 -- setup/hugepages.sh@27 -- # local node 00:03:42.339 11:55:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.339 11:55:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:42.339 11:55:49 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:42.339 11:55:49 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:42.339 11:55:49 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:42.339 11:55:49 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:42.339 11:55:49 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:42.339 11:55:49 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:42.339 11:55:49 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:42.339 11:55:49 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.339 11:55:49 -- setup/common.sh@18 -- # local node=0 00:03:42.339 11:55:49 -- setup/common.sh@19 -- # local var val 00:03:42.339 11:55:49 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.339 11:55:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.339 11:55:49 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:42.339 11:55:49 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:42.339 11:55:49 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.339 11:55:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.339 11:55:49 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.339 11:55:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069932 kB' 'MemFree: 32080476 kB' 'MemUsed: 15989456 kB' 'SwapCached: 0 kB' 'Active: 10399204 kB' 'Inactive: 3496976 kB' 'Active(anon): 10150128 kB' 'Inactive(anon): 0 kB' 'Active(file): 249076 kB' 'Inactive(file): 3496976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 13517828 kB' 'Mapped: 102748 kB' 'AnonPages: 381484 kB' 'Shmem: 9771776 kB' 'KernelStack: 9208 kB' 'PageTables: 5180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 345868 kB' 'Slab: 574512 kB' 'SReclaimable: 345868 kB' 'SUnreclaim: 228644 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.339 11:55:49 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.339 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.339 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.340 11:55:49 -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # [[ 
Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.340 11:55:49 -- setup/common.sh@33 -- # echo 0 00:03:42.340 11:55:49 -- setup/common.sh@33 -- # return 0 00:03:42.340 11:55:49 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:42.340 11:55:49 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:42.340 11:55:49 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:42.340 11:55:49 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:42.340 11:55:49 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:42.340 11:55:49 -- setup/common.sh@18 -- # local node=1 00:03:42.340 11:55:49 -- setup/common.sh@19 -- # local var val 00:03:42.340 11:55:49 -- setup/common.sh@20 -- # local mem_f mem 00:03:42.340 11:55:49 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:42.340 11:55:49 -- setup/common.sh@23 -- # [[ 
-e /sys/devices/system/node/node1/meminfo ]] 00:03:42.340 11:55:49 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:42.340 11:55:49 -- setup/common.sh@28 -- # mapfile -t mem 00:03:42.340 11:55:49 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.340 11:55:49 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44223620 kB' 'MemFree: 41237916 kB' 'MemUsed: 2985704 kB' 'SwapCached: 0 kB' 'Active: 1168956 kB' 'Inactive: 149428 kB' 'Active(anon): 945324 kB' 'Inactive(anon): 0 kB' 'Active(file): 223632 kB' 'Inactive(file): 149428 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1197972 kB' 'Mapped: 83520 kB' 'AnonPages: 120488 kB' 'Shmem: 824912 kB' 'KernelStack: 6472 kB' 'PageTables: 3076 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130792 kB' 'Slab: 271268 kB' 'SReclaimable: 130792 kB' 'SUnreclaim: 140476 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.340 
11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.340 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.340 11:55:49 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.341 11:55:49 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.341 
11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.341 11:55:49 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 
00:03:42.341 11:55:49 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # continue 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # IFS=': ' 00:03:42.341 11:55:49 -- setup/common.sh@31 -- # read -r var val _ 00:03:42.341 11:55:49 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:42.341 11:55:49 -- setup/common.sh@33 -- # echo 0 00:03:42.341 11:55:49 -- setup/common.sh@33 -- # return 0 00:03:42.341 11:55:49 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:42.341 11:55:49 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:42.341 11:55:49 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:42.341 11:55:49 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:42.341 11:55:49 -- 
setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:42.341 node0=512 expecting 512 00:03:42.341 11:55:49 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:42.341 11:55:49 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:42.341 11:55:49 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:42.341 11:55:49 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:42.341 node1=512 expecting 512 00:03:42.341 11:55:49 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:42.341 00:03:42.341 real 0m3.793s 00:03:42.341 user 0m1.394s 00:03:42.341 sys 0m2.464s 00:03:42.341 11:55:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:42.341 11:55:49 -- common/autotest_common.sh@10 -- # set +x 00:03:42.341 ************************************ 00:03:42.341 END TEST even_2G_alloc 00:03:42.341 ************************************ 00:03:42.341 11:55:49 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:42.341 11:55:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:42.341 11:55:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:42.341 11:55:49 -- common/autotest_common.sh@10 -- # set +x 00:03:42.341 ************************************ 00:03:42.341 START TEST odd_alloc 00:03:42.342 ************************************ 00:03:42.342 11:55:49 -- common/autotest_common.sh@1104 -- # odd_alloc 00:03:42.342 11:55:49 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:42.342 11:55:49 -- setup/hugepages.sh@49 -- # local size=2098176 00:03:42.342 11:55:49 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:42.342 11:55:49 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:42.342 11:55:49 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:42.342 11:55:49 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:42.342 11:55:49 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:42.342 11:55:49 -- setup/hugepages.sh@62 -- # local 
user_nodes 00:03:42.342 11:55:49 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:42.342 11:55:49 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:42.342 11:55:49 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:42.342 11:55:49 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:42.342 11:55:49 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:42.342 11:55:49 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:42.342 11:55:49 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:42.342 11:55:49 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:42.342 11:55:49 -- setup/hugepages.sh@83 -- # : 513 00:03:42.342 11:55:49 -- setup/hugepages.sh@84 -- # : 1 00:03:42.342 11:55:49 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:42.342 11:55:49 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:42.342 11:55:49 -- setup/hugepages.sh@83 -- # : 0 00:03:42.342 11:55:49 -- setup/hugepages.sh@84 -- # : 0 00:03:42.342 11:55:49 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:42.342 11:55:49 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:42.342 11:55:49 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:42.342 11:55:49 -- setup/hugepages.sh@160 -- # setup output 00:03:42.342 11:55:49 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.342 11:55:49 -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:03:45.634 0000:85:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:85:05.5 00:03:45.634 0000:ae:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:ae:05.5 00:03:45.634 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:45.634 0000:5e:00.0 (8086 0b60): Already using the vfio-pci driver 00:03:45.634 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:45.634 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:45.634 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:45.634 0000:00:04.3 (8086 
2021): Already using the vfio-pci driver 00:03:45.634 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:45.634 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:45.634 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:45.634 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:45.634 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:45.634 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:45.634 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:45.634 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:45.634 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:45.634 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:45.634 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:45.634 11:55:52 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:45.634 11:55:52 -- setup/hugepages.sh@89 -- # local node 00:03:45.634 11:55:52 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:45.634 11:55:52 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:45.634 11:55:52 -- setup/hugepages.sh@92 -- # local surp 00:03:45.634 11:55:52 -- setup/hugepages.sh@93 -- # local resv 00:03:45.634 11:55:52 -- setup/hugepages.sh@94 -- # local anon 00:03:45.634 11:55:52 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:45.634 11:55:52 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:45.634 11:55:52 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:45.634 11:55:52 -- setup/common.sh@18 -- # local node= 00:03:45.634 11:55:52 -- setup/common.sh@19 -- # local var val 00:03:45.634 11:55:52 -- setup/common.sh@20 -- # local mem_f mem 00:03:45.634 11:55:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.634 11:55:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.634 11:55:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.634 11:55:52 -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:45.634 11:55:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.634 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.634 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.635 11:55:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293552 kB' 'MemFree: 73301160 kB' 'MemAvailable: 77060948 kB' 'Buffers: 12460 kB' 'Cached: 14703404 kB' 'SwapCached: 0 kB' 'Active: 11570200 kB' 'Inactive: 3646404 kB' 'Active(anon): 11097492 kB' 'Inactive(anon): 0 kB' 'Active(file): 472708 kB' 'Inactive(file): 3646404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504288 kB' 'Mapped: 186264 kB' 'Shmem: 10596752 kB' 'KReclaimable: 476660 kB' 'Slab: 845464 kB' 'SReclaimable: 476660 kB' 'SUnreclaim: 368804 kB' 'KernelStack: 15696 kB' 'PageTables: 8340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485780 kB' 'Committed_AS: 12501752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201036 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1230292 kB' 'DirectMap2M: 20465664 kB' 'DirectMap1G: 79691776 kB' 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.635 11:55:52 -- setup/common.sh@32 -- 
# continue 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.635 11:55:52 -- setup/common.sh@31 
-- # IFS=': ' 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.635 11:55:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.635 11:55:52 -- 
setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.635 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.635 11:55:52 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:45.636 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:45.636 11:55:52 -- setup/common.sh@33 -- # echo 0 00:03:45.636 11:55:52 -- setup/common.sh@33 -- # return 0 00:03:45.636 11:55:52 -- setup/hugepages.sh@97 -- # anon=0 00:03:45.636 11:55:52 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:45.636 11:55:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.636 11:55:52 -- setup/common.sh@18 -- # local node= 00:03:45.636 11:55:52 -- setup/common.sh@19 -- # local var val 00:03:45.636 11:55:52 -- setup/common.sh@20 -- # local mem_f mem 00:03:45.636 11:55:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.636 11:55:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.636 11:55:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.636 11:55:52 -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.636 11:55:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.636 11:55:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293552 kB' 'MemFree: 73301260 kB' 'MemAvailable: 77061048 kB' 'Buffers: 12460 kB' 'Cached: 14703404 kB' 'SwapCached: 0 kB' 'Active: 11570756 kB' 'Inactive: 3646404 kB' 'Active(anon): 11098048 
kB' 'Inactive(anon): 0 kB' 'Active(file): 472708 kB' 'Inactive(file): 3646404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504992 kB' 'Mapped: 186264 kB' 'Shmem: 10596752 kB' 'KReclaimable: 476660 kB' 'Slab: 845464 kB' 'SReclaimable: 476660 kB' 'SUnreclaim: 368804 kB' 'KernelStack: 15744 kB' 'PageTables: 8500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485780 kB' 'Committed_AS: 12501764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200972 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1230292 kB' 'DirectMap2M: 20465664 kB' 'DirectMap1G: 79691776 kB' 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.636 11:55:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # read -r var val 
_ 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.636 11:55:52 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.636 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.636 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.637 11:55:52 
-- setup/common.sh@32 -- # continue 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.637 11:55:52 -- 
setup/common.sh@32 -- # continue 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.637 11:55:52 -- setup/common.sh@33 -- # echo 0 00:03:45.637 11:55:52 -- setup/common.sh@33 -- # return 0 00:03:45.637 11:55:52 -- setup/hugepages.sh@99 -- # surp=0 00:03:45.637 11:55:52 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:45.637 11:55:52 -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:03:45.637 11:55:52 -- setup/common.sh@18 -- # local node= 00:03:45.637 11:55:52 -- setup/common.sh@19 -- # local var val 00:03:45.637 11:55:52 -- setup/common.sh@20 -- # local mem_f mem 00:03:45.637 11:55:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.637 11:55:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.637 11:55:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.637 11:55:52 -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.637 11:55:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.637 11:55:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293552 kB' 'MemFree: 73302164 kB' 'MemAvailable: 77061952 kB' 'Buffers: 12460 kB' 'Cached: 14703416 kB' 'SwapCached: 0 kB' 'Active: 11568892 kB' 'Inactive: 3646404 kB' 'Active(anon): 11096184 kB' 'Inactive(anon): 0 kB' 'Active(file): 472708 kB' 'Inactive(file): 3646404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502904 kB' 'Mapped: 186264 kB' 'Shmem: 10596764 kB' 'KReclaimable: 476660 kB' 'Slab: 845448 kB' 'SReclaimable: 476660 kB' 'SUnreclaim: 368788 kB' 'KernelStack: 15664 kB' 'PageTables: 8200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485780 kB' 'Committed_AS: 12501776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200956 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1230292 kB' 
'DirectMap2M: 20465664 kB' 'DirectMap1G: 79691776 kB' 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.637 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.637 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 
11:55:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- 
setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- 
setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.638 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.638 11:55:52 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.639 11:55:52 -- setup/common.sh@32 -- 
# continue 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:45.639 11:55:52 -- setup/common.sh@33 -- # echo 0 00:03:45.639 11:55:52 -- setup/common.sh@33 -- # return 0 00:03:45.639 11:55:52 -- setup/hugepages.sh@100 -- # resv=0 00:03:45.639 11:55:52 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:45.639 nr_hugepages=1025 00:03:45.639 11:55:52 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:45.639 resv_hugepages=0 00:03:45.639 11:55:52 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:45.639 surplus_hugepages=0 00:03:45.639 11:55:52 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:45.639 anon_hugepages=0 00:03:45.639 11:55:52 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:45.639 11:55:52 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:45.639 11:55:52 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:45.639 11:55:52 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:45.639 11:55:52 -- setup/common.sh@18 -- # local node= 00:03:45.639 11:55:52 -- setup/common.sh@19 -- # local var val 00:03:45.639 11:55:52 -- setup/common.sh@20 -- # local mem_f mem 00:03:45.639 11:55:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.639 11:55:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:45.639 11:55:52 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:45.639 11:55:52 -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.639 11:55:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.639 11:55:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293552 kB' 'MemFree: 73302164 kB' 'MemAvailable: 77061952 kB' 
'Buffers: 12460 kB' 'Cached: 14703444 kB' 'SwapCached: 0 kB' 'Active: 11568948 kB' 'Inactive: 3646404 kB' 'Active(anon): 11096240 kB' 'Inactive(anon): 0 kB' 'Active(file): 472708 kB' 'Inactive(file): 3646404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 502928 kB' 'Mapped: 186264 kB' 'Shmem: 10596792 kB' 'KReclaimable: 476660 kB' 'Slab: 845448 kB' 'SReclaimable: 476660 kB' 'SUnreclaim: 368788 kB' 'KernelStack: 15664 kB' 'PageTables: 8200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53485780 kB' 'Committed_AS: 12501792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200956 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1230292 kB' 'DirectMap2M: 20465664 kB' 'DirectMap1G: 79691776 kB' 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.639 11:55:52 
-- setup/common.sh@31 -- # read -r var val _ 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.639 11:55:52 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.639 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.639 11:55:52 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 
00:03:45.640 11:55:52 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.640 11:55:52 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.640 11:55:52 -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.640 11:55:52 -- 
setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.640 11:55:52 
-- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.640 11:55:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:45.640 11:55:52 -- setup/common.sh@33 -- # echo 1025 00:03:45.640 11:55:52 -- setup/common.sh@33 -- # return 0 00:03:45.640 11:55:52 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:45.640 11:55:52 -- setup/hugepages.sh@112 -- # get_nodes 00:03:45.640 11:55:52 -- setup/hugepages.sh@27 -- # local node 00:03:45.640 11:55:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.640 11:55:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:45.640 11:55:52 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:45.640 11:55:52 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:45.640 11:55:52 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:45.640 11:55:52 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:45.640 11:55:52 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:45.640 11:55:52 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:45.640 11:55:52 -- setup/hugepages.sh@117 -- # get_meminfo 
HugePages_Surp 0 00:03:45.640 11:55:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.640 11:55:52 -- setup/common.sh@18 -- # local node=0 00:03:45.640 11:55:52 -- setup/common.sh@19 -- # local var val 00:03:45.640 11:55:52 -- setup/common.sh@20 -- # local mem_f mem 00:03:45.640 11:55:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.640 11:55:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:45.640 11:55:52 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:45.640 11:55:52 -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.640 11:55:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.640 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.641 11:55:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069932 kB' 'MemFree: 32066996 kB' 'MemUsed: 16002936 kB' 'SwapCached: 0 kB' 'Active: 10399396 kB' 'Inactive: 3496976 kB' 'Active(anon): 10150320 kB' 'Inactive(anon): 0 kB' 'Active(file): 249076 kB' 'Inactive(file): 3496976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 13517828 kB' 'Mapped: 102744 kB' 'AnonPages: 381904 kB' 'Shmem: 9771776 kB' 'KernelStack: 9208 kB' 'PageTables: 5132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 345868 kB' 'Slab: 574344 kB' 'SReclaimable: 345868 kB' 'SUnreclaim: 228476 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.641 11:55:52 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # [[ 
Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # [[ Mapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:45.641 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.641 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.641 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.902 11:55:52 -- 
setup/common.sh@32 -- # continue 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.902 11:55:52 -- 
setup/common.sh@33 -- # echo 0 00:03:45.902 11:55:52 -- setup/common.sh@33 -- # return 0 00:03:45.902 11:55:52 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:45.902 11:55:52 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:45.902 11:55:52 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:45.902 11:55:52 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:45.902 11:55:52 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:45.902 11:55:52 -- setup/common.sh@18 -- # local node=1 00:03:45.902 11:55:52 -- setup/common.sh@19 -- # local var val 00:03:45.902 11:55:52 -- setup/common.sh@20 -- # local mem_f mem 00:03:45.902 11:55:52 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:45.902 11:55:52 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:45.902 11:55:52 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:45.902 11:55:52 -- setup/common.sh@28 -- # mapfile -t mem 00:03:45.902 11:55:52 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.902 11:55:52 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44223620 kB' 'MemFree: 41235788 kB' 'MemUsed: 2987832 kB' 'SwapCached: 0 kB' 'Active: 1170260 kB' 'Inactive: 149428 kB' 'Active(anon): 946628 kB' 'Inactive(anon): 0 kB' 'Active(file): 223632 kB' 'Inactive(file): 149428 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1198092 kB' 'Mapped: 83520 kB' 'AnonPages: 121724 kB' 'Shmem: 825032 kB' 'KernelStack: 6488 kB' 'PageTables: 3168 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 130792 kB' 'Slab: 271104 kB' 'SReclaimable: 130792 kB' 'SUnreclaim: 140312 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.902 11:55:52 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.902 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.902 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # read 
-r var val _ 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.903 11:55:52 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # [[ 
AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # [[ 
HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # continue 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # IFS=': ' 00:03:45.903 11:55:52 -- setup/common.sh@31 -- # read -r var val _ 00:03:45.903 11:55:52 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:45.903 11:55:52 -- setup/common.sh@33 -- # echo 0 00:03:45.903 11:55:52 -- setup/common.sh@33 -- # return 0 00:03:45.903 11:55:52 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:45.903 11:55:52 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:45.903 11:55:52 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:45.903 11:55:52 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:45.903 11:55:52 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:45.903 node0=512 expecting 513 00:03:45.903 11:55:52 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:45.903 11:55:52 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:45.903 11:55:52 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:45.903 11:55:52 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:45.903 node1=513 expecting 512 00:03:45.903 11:55:52 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:45.903 00:03:45.903 real 0m3.761s 00:03:45.903 user 0m1.461s 00:03:45.903 sys 0m2.405s 00:03:45.903 11:55:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:45.903 11:55:52 -- common/autotest_common.sh@10 -- # set +x 00:03:45.903 ************************************ 00:03:45.903 END TEST odd_alloc 00:03:45.903 ************************************ 00:03:45.903 11:55:53 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:45.903 11:55:53 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:45.903 11:55:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:45.903 11:55:53 -- 
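The xtrace above shows `get_meminfo` scanning field names one by one (each non-matching field hits `continue`) until `HugePages_Surp` is found and its value echoed. A minimal standalone sketch of that lookup pattern follows; `get_field` and `sample` are simplified stand-ins for illustration, not the actual setup/common.sh helpers:

```shell
#!/usr/bin/env bash
# Sketch of the field-scan pattern traced above: split each
# "Field: value [kB]" line on ': ' and echo the value for the
# requested field, exactly as the [[ var == get ]] / continue
# loop in the trace does.
get_field() {
  local get=$1 var val _
  while IFS=': ' read -r var val _; do
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done
  return 1
}

# Sample data mirroring the tail of the node0 meminfo output above.
sample='HugePages_Total: 512
HugePages_Free: 512
HugePages_Surp: 0'

get_field HugePages_Surp <<<"$sample"   # prints 0
```

In the real script the per-node variant reads `/sys/devices/system/node/node$N/meminfo` instead of `/proc/meminfo` and first strips the `Node N ` prefix those files carry, as the `mem=("${mem[@]#Node +([0-9]) }")` step in the trace shows.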
common/autotest_common.sh@10 -- # set +x 00:03:45.903 ************************************ 00:03:45.903 START TEST custom_alloc 00:03:45.903 ************************************ 00:03:45.903 11:55:53 -- common/autotest_common.sh@1104 -- # custom_alloc 00:03:45.903 11:55:53 -- setup/hugepages.sh@167 -- # local IFS=, 00:03:45.903 11:55:53 -- setup/hugepages.sh@169 -- # local node 00:03:45.903 11:55:53 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:45.903 11:55:53 -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:45.903 11:55:53 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:45.903 11:55:53 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:45.903 11:55:53 -- setup/hugepages.sh@49 -- # local size=1048576 00:03:45.903 11:55:53 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:45.903 11:55:53 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:45.903 11:55:53 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:45.903 11:55:53 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:45.903 11:55:53 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:45.903 11:55:53 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:45.903 11:55:53 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:45.903 11:55:53 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:45.903 11:55:53 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:45.903 11:55:53 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:45.903 11:55:53 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:45.903 11:55:53 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:45.903 11:55:53 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:45.903 11:55:53 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:45.903 11:55:53 -- setup/hugepages.sh@83 -- # : 256 00:03:45.903 11:55:53 -- setup/hugepages.sh@84 -- # : 1 00:03:45.903 11:55:53 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:45.903 11:55:53 -- setup/hugepages.sh@82 -- # 
nodes_test[_no_nodes - 1]=256 00:03:45.903 11:55:53 -- setup/hugepages.sh@83 -- # : 0 00:03:45.903 11:55:53 -- setup/hugepages.sh@84 -- # : 0 00:03:45.903 11:55:53 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:45.903 11:55:53 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:45.903 11:55:53 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:45.903 11:55:53 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:45.903 11:55:53 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:45.903 11:55:53 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:45.903 11:55:53 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:45.903 11:55:53 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:45.903 11:55:53 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:45.903 11:55:53 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:45.903 11:55:53 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:45.903 11:55:53 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:45.903 11:55:53 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:45.903 11:55:53 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:45.904 11:55:53 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:45.904 11:55:53 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:45.904 11:55:53 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:45.904 11:55:53 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:45.904 11:55:53 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:45.904 11:55:53 -- setup/hugepages.sh@78 -- # return 0 00:03:45.904 11:55:53 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:45.904 11:55:53 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:45.904 11:55:53 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:45.904 11:55:53 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:45.904 11:55:53 -- setup/hugepages.sh@181 -- # for node in 
"${!nodes_hp[@]}" 00:03:45.904 11:55:53 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:45.904 11:55:53 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:45.904 11:55:53 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:45.904 11:55:53 -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:45.904 11:55:53 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:45.904 11:55:53 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:45.904 11:55:53 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:45.904 11:55:53 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:45.904 11:55:53 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:45.904 11:55:53 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:45.904 11:55:53 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:45.904 11:55:53 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:45.904 11:55:53 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:45.904 11:55:53 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:45.904 11:55:53 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:45.904 11:55:53 -- setup/hugepages.sh@78 -- # return 0 00:03:45.904 11:55:53 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:45.904 11:55:53 -- setup/hugepages.sh@187 -- # setup output 00:03:45.904 11:55:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:45.904 11:55:53 -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:03:49.191 0000:85:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:85:05.5 00:03:49.191 0000:ae:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:ae:05.5 00:03:49.191 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:49.191 0000:5e:00.0 (8086 0b60): Already using the vfio-pci driver 00:03:49.191 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:49.191 
0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:49.191 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:49.191 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:49.191 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:49.191 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:49.191 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:49.191 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:49.191 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:49.191 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:49.191 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:49.191 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:49.191 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:49.191 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:49.191 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:49.191 11:55:56 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:49.191 11:55:56 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:49.191 11:55:56 -- setup/hugepages.sh@89 -- # local node 00:03:49.191 11:55:56 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:49.191 11:55:56 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:49.191 11:55:56 -- setup/hugepages.sh@92 -- # local surp 00:03:49.191 11:55:56 -- setup/hugepages.sh@93 -- # local resv 00:03:49.191 11:55:56 -- setup/hugepages.sh@94 -- # local anon 00:03:49.191 11:55:56 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:49.191 11:55:56 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:49.191 11:55:56 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:49.191 11:55:56 -- setup/common.sh@18 -- # local node= 00:03:49.191 11:55:56 -- setup/common.sh@19 -- # local var val 00:03:49.191 11:55:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.191 
11:55:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.191 11:55:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.191 11:55:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.191 11:55:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.191 11:55:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.191 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 11:55:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293552 kB' 'MemFree: 72258300 kB' 'MemAvailable: 76018088 kB' 'Buffers: 12460 kB' 'Cached: 14703516 kB' 'SwapCached: 0 kB' 'Active: 11572992 kB' 'Inactive: 3646404 kB' 'Active(anon): 11100284 kB' 'Inactive(anon): 0 kB' 'Active(file): 472708 kB' 'Inactive(file): 3646404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506108 kB' 'Mapped: 186296 kB' 'Shmem: 10596864 kB' 'KReclaimable: 476660 kB' 'Slab: 845532 kB' 'SReclaimable: 476660 kB' 'SUnreclaim: 368872 kB' 'KernelStack: 16496 kB' 'PageTables: 10572 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962516 kB' 'Committed_AS: 12506436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201292 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1230292 kB' 'DirectMap2M: 20465664 kB' 'DirectMap1G: 79691776 kB' 00:03:49.191 11:55:56 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.191 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.191 11:55:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:49.191 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 11:55:56 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.191 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.191 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 11:55:56 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.191 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.191 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 11:55:56 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.191 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.191 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 11:55:56 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.191 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.191 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 11:55:56 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.191 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.191 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 11:55:56 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.191 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.191 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 11:55:56 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.191 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.191 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 11:55:56 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:49.191 11:55:56 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.191 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.191 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 11:55:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.191 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.191 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 11:55:56 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.191 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.191 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 11:55:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.191 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.191 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 11:55:56 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.191 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.191 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 11:55:56 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.191 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.191 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.191 11:55:56 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.191 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.191 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 11:55:56 -- setup/common.sh@31 -- # read -r var 
val _ 00:03:49.191 11:55:56 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.191 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.191 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.191 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # [[ Shmem == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.192 
11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # 
continue 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:49.192 11:55:56 -- setup/common.sh@33 -- # echo 0 00:03:49.192 11:55:56 -- setup/common.sh@33 -- # return 0 00:03:49.192 11:55:56 -- setup/hugepages.sh@97 -- # anon=0 00:03:49.192 11:55:56 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:49.192 11:55:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.192 11:55:56 -- setup/common.sh@18 -- # local node= 00:03:49.192 11:55:56 -- setup/common.sh@19 -- # local var val 00:03:49.192 11:55:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.192 11:55:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.192 11:55:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.192 11:55:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.192 11:55:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.192 11:55:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.192 11:55:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 11:55:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293552 kB' 'MemFree: 72257596 kB' 'MemAvailable: 76017384 kB' 'Buffers: 12460 kB' 'Cached: 14703520 kB' 'SwapCached: 0 kB' 'Active: 11573224 kB' 'Inactive: 3646404 kB' 'Active(anon): 11100516 kB' 'Inactive(anon): 0 kB' 'Active(file): 472708 kB' 'Inactive(file): 3646404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 506432 kB' 'Mapped: 186344 kB' 'Shmem: 10596868 kB' 'KReclaimable: 476660 kB' 'Slab: 845544 kB' 'SReclaimable: 476660 kB' 'SUnreclaim: 368884 kB' 'KernelStack: 16656 kB' 'PageTables: 11280 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962516 kB' 'Committed_AS: 12506448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201276 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1230292 kB' 'DirectMap2M: 20465664 kB' 'DirectMap1G: 79691776 kB' 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 
00:03:49.192 11:55:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.192 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.192 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- 
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ 
Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- 
setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # 
continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 
00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.193 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.193 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.194 11:55:56 -- setup/common.sh@33 -- # echo 0 00:03:49.194 11:55:56 
-- setup/common.sh@33 -- # return 0 00:03:49.194 11:55:56 -- setup/hugepages.sh@99 -- # surp=0 00:03:49.194 11:55:56 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:49.194 11:55:56 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:49.194 11:55:56 -- setup/common.sh@18 -- # local node= 00:03:49.194 11:55:56 -- setup/common.sh@19 -- # local var val 00:03:49.194 11:55:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.194 11:55:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.194 11:55:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.194 11:55:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.194 11:55:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.194 11:55:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.194 11:55:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293552 kB' 'MemFree: 72256492 kB' 'MemAvailable: 76016280 kB' 'Buffers: 12460 kB' 'Cached: 14703532 kB' 'SwapCached: 0 kB' 'Active: 11572624 kB' 'Inactive: 3646404 kB' 'Active(anon): 11099916 kB' 'Inactive(anon): 0 kB' 'Active(file): 472708 kB' 'Inactive(file): 3646404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505840 kB' 'Mapped: 186344 kB' 'Shmem: 10596880 kB' 'KReclaimable: 476660 kB' 'Slab: 845544 kB' 'SReclaimable: 476660 kB' 'SUnreclaim: 368884 kB' 'KernelStack: 16288 kB' 'PageTables: 10296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962516 kB' 'Committed_AS: 12506460 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201228 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1230292 kB' 'DirectMap2M: 20465664 kB' 'DirectMap1G: 79691776 kB' 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.194 11:55:56 -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:49.194 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.194 11:55:56 -- 
setup/common.sh@32 -- # continue 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.194 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.194 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # 
continue 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # continue 
00:03:49.195 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.195 11:55:56 
-- setup/common.sh@31 -- # IFS=': ' 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:49.195 11:55:56 -- setup/common.sh@33 -- # echo 0 00:03:49.195 11:55:56 -- setup/common.sh@33 -- # return 0 00:03:49.195 11:55:56 -- setup/hugepages.sh@100 -- # resv=0 00:03:49.195 11:55:56 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:49.195 nr_hugepages=1536 00:03:49.195 11:55:56 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:49.195 resv_hugepages=0 00:03:49.195 11:55:56 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:49.195 surplus_hugepages=0 00:03:49.195 11:55:56 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:49.195 anon_hugepages=0 00:03:49.195 11:55:56 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:49.195 11:55:56 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:49.195 11:55:56 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:49.195 11:55:56 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:49.195 11:55:56 -- setup/common.sh@18 -- # local node= 00:03:49.195 11:55:56 -- setup/common.sh@19 -- # local var val 00:03:49.195 11:55:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.195 11:55:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.195 11:55:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:49.195 11:55:56 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:49.195 11:55:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.195 11:55:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.195 
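The trace above is `setup/common.sh`'s `get_meminfo` walking every `Key: value` line of /proc/meminfo with `IFS=': ' read -r var val _`, skipping (`continue`) until the requested field (`HugePages_Rsvd` here) matches, then echoing its value. A minimal sketch of that pattern, with an illustrative function name (the real helper takes a node argument and strips `Node N` prefixes, which this sketch omits):

```shell
# Sketch of the get_meminfo scan seen in the trace: split each
# "Key: value [unit]" line on ':' and ' ', return the value for
# the requested key. File path is a parameter for testability.
get_meminfo_sketch() {
  local get=$1 file=${2:-/proc/meminfo}
  local var val _
  while IFS=': ' read -r var val _; do
    # var holds the field name, val the number, _ swallows "kB"
    [[ $var == "$get" ]] && { echo "$val"; return 0; }
  done < "$file"
  return 1
}
```

For example, `get_meminfo_sketch HugePages_Rsvd` would print `0` against the meminfo snapshot logged above, matching the `echo 0 / return 0` and `resv=0` lines in the trace.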
11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.195 11:55:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293552 kB' 'MemFree: 72257192 kB' 'MemAvailable: 76016980 kB' 'Buffers: 12460 kB' 'Cached: 14703532 kB' 'SwapCached: 0 kB' 'Active: 11572704 kB' 'Inactive: 3646404 kB' 'Active(anon): 11099996 kB' 'Inactive(anon): 0 kB' 'Active(file): 472708 kB' 'Inactive(file): 3646404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505916 kB' 'Mapped: 186344 kB' 'Shmem: 10596880 kB' 'KReclaimable: 476660 kB' 'Slab: 845544 kB' 'SReclaimable: 476660 kB' 'SUnreclaim: 368884 kB' 'KernelStack: 16352 kB' 'PageTables: 10728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 52962516 kB' 'Committed_AS: 12506476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201308 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1230292 kB' 'DirectMap2M: 20465664 kB' 'DirectMap1G: 79691776 kB' 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.195 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.195 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.196 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.196 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.196 11:55:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.196 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.196 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.196 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.196 11:55:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.196 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.196 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.196 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.196 11:55:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.196 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.196 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.196 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.196 11:55:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.456 
11:55:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.456 11:55:56 -- 
setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # [[ KReclaimable 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.456 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.456 11:55:56 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # [[ VmallocChunk == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # [[ FilePmdMapped 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:49.457 11:55:56 -- setup/common.sh@33 -- # echo 1536 00:03:49.457 11:55:56 -- setup/common.sh@33 -- # return 0 00:03:49.457 11:55:56 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:49.457 11:55:56 -- setup/hugepages.sh@112 -- # get_nodes 00:03:49.457 11:55:56 -- setup/hugepages.sh@27 -- # local node 00:03:49.457 11:55:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:49.457 11:55:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:49.457 11:55:56 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:49.457 11:55:56 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:49.457 11:55:56 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:49.457 11:55:56 -- setup/hugepages.sh@33 -- # (( 
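At this point the trace shows `setup/hugepages.sh` confirming `HugePages_Total == 1536` globally, then `get_nodes` recording the expected per-NUMA-node split (512 on node 0, 1024 on node 1, `no_nodes=2`). A hedged sketch of that consistency check, with an illustrative function name not taken from the scripts themselves:

```shell
# Sketch of the per-node accounting step: given the global
# nr_hugepages and the expected count for each NUMA node,
# succeed only if the node counts sum to the global total.
check_node_split() {
  local total=$1; shift
  local sum=0 n
  for n in "$@"; do
    (( sum += n ))   # accumulate per-node expectations
  done
  (( sum == total ))
}
```

Applied to the values in this log, `check_node_split 1536 512 1024` succeeds, which is why the subsequent trace proceeds to read each node's `/sys/devices/system/node/node0/meminfo` rather than failing the test.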
no_nodes > 0 )) 00:03:49.457 11:55:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:49.457 11:55:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:49.457 11:55:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:49.457 11:55:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.457 11:55:56 -- setup/common.sh@18 -- # local node=0 00:03:49.457 11:55:56 -- setup/common.sh@19 -- # local var val 00:03:49.457 11:55:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.457 11:55:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.457 11:55:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:49.457 11:55:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:49.457 11:55:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.457 11:55:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.457 11:55:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069932 kB' 'MemFree: 32069568 kB' 'MemUsed: 16000364 kB' 'SwapCached: 0 kB' 'Active: 10401020 kB' 'Inactive: 3496976 kB' 'Active(anon): 10151944 kB' 'Inactive(anon): 0 kB' 'Active(file): 249076 kB' 'Inactive(file): 3496976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 13517828 kB' 'Mapped: 102824 kB' 'AnonPages: 383264 kB' 'Shmem: 9771776 kB' 'KernelStack: 9576 kB' 'PageTables: 6324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 345868 kB' 'Slab: 574300 kB' 'SReclaimable: 345868 kB' 'SUnreclaim: 228432 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # [[ MemTotal 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.457 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.457 11:55:56 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:49.457 11:55:56 -- setup/common.sh@32 -- # continue [xtrace elided: per-field scan of node0 meminfo — Active(file), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, KReclaimable, Slab, SReclaimable, SUnreclaim, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, Unaccepted, HugePages_Total, HugePages_Free — none match HugePages_Surp, each taking the continue branch] 00:03:49.458 11:55:56 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:49.458 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.458 11:55:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.458 11:55:56 -- setup/common.sh@33 -- # echo 0 00:03:49.458 11:55:56 -- setup/common.sh@33 -- # return 0 00:03:49.458 11:55:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:49.458 11:55:56 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:49.458 11:55:56 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:49.458 11:55:56 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:49.458 11:55:56 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:49.458 11:55:56 -- setup/common.sh@18 -- # local node=1 00:03:49.458 11:55:56 -- setup/common.sh@19 -- # local var val 00:03:49.458 11:55:56 -- setup/common.sh@20 -- # local mem_f mem 00:03:49.458 11:55:56 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:49.458 11:55:56 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:49.458 11:55:56 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:49.458 11:55:56 -- setup/common.sh@28 -- # mapfile -t mem 00:03:49.458 11:55:56 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:49.458 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.458 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.458 11:55:56 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44223620 kB' 'MemFree: 40187212 kB' 'MemUsed: 4036408 kB' 'SwapCached: 0 kB' 'Active: 1170628 kB' 'Inactive: 149428 kB' 'Active(anon): 946996 kB' 'Inactive(anon): 0 kB' 'Active(file): 223632 kB' 'Inactive(file): 149428 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1198204 kB' 'Mapped: 83520 kB' 'AnonPages: 121948 kB' 'Shmem: 825144 kB' 'KernelStack: 6488 kB' 'PageTables: 3128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'KReclaimable: 130792 kB' 'Slab: 271236 kB' 'SReclaimable: 130792 kB' 'SUnreclaim: 140444 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' [xtrace elided: per-field scan of node1 meminfo — MemTotal through Unaccepted — none match HugePages_Surp, each taking the continue branch] 00:03:49.459 11:55:56 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:49.459 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.459 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.459 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.459 11:55:56 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.459 11:55:56 -- setup/common.sh@32 -- # continue 00:03:49.459 11:55:56 -- setup/common.sh@31 -- # IFS=': ' 00:03:49.459 11:55:56 -- setup/common.sh@31 -- # read -r var val _ 00:03:49.459 11:55:56 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:49.459 11:55:56 -- setup/common.sh@33 -- # echo 0 00:03:49.459 11:55:56 -- setup/common.sh@33 -- # return 0 00:03:49.459 11:55:56 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:49.459 11:55:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:49.459 11:55:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:49.459 11:55:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:49.459 11:55:56 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:49.459 node0=512 expecting 512 00:03:49.459 11:55:56 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:49.459 11:55:56 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:49.459 11:55:56 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:49.459 11:55:56 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:49.459 node1=1024 expecting 1024 00:03:49.459 11:55:56 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:49.459 00:03:49.459 real 0m3.562s 00:03:49.459 user 0m1.381s 00:03:49.459 sys 0m2.243s 00:03:49.459 11:55:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:49.459 11:55:56 -- common/autotest_common.sh@10 -- # set +x 00:03:49.459 ************************************ 00:03:49.459 END TEST custom_alloc 00:03:49.459 ************************************ 00:03:49.459 11:55:56 -- 
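The xtrace above is `get_meminfo` walking every `Key: value` line of a node's meminfo file and taking `continue` until the requested key matches, then echoing the value. A minimal standalone sketch of that parsing pattern (the helper body and demo file are ours, not SPDK's exact setup/common.sh):

```shell
#!/usr/bin/env bash
# Sketch of the meminfo field-scan pattern driven in the trace above:
# split each "Key: value kB" line on ': ' and print the value for the
# requested key, or 0 when the key is absent (the trace's "echo 0" path).
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo}
    local var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    echo 0
}

# Demo against a fixed snippet rather than the live /proc/meminfo:
snippet=$(mktemp)
printf '%s\n' 'MemTotal: 44223620 kB' 'HugePages_Surp: 0' > "$snippet"
get_meminfo HugePages_Surp "$snippet"   # prints 0
get_meminfo MemTotal "$snippet"         # prints 44223620
rm -f "$snippet"
```

The real script reads the whole file into an array with `mapfile` and strips the `Node N ` prefix that per-node sysfs meminfo lines carry; the field-by-field `[[ … ]] / continue` loop in the log is this same scan unrolled by xtrace.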
setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:49.459 11:55:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:49.459 11:55:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:49.459 11:55:56 -- common/autotest_common.sh@10 -- # set +x 00:03:49.459 ************************************ 00:03:49.459 START TEST no_shrink_alloc 00:03:49.459 ************************************ 00:03:49.459 11:55:56 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:03:49.459 11:55:56 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:49.459 11:55:56 -- setup/hugepages.sh@49 -- # local size=2097152 00:03:49.459 11:55:56 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:49.459 11:55:56 -- setup/hugepages.sh@51 -- # shift 00:03:49.459 11:55:56 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:49.459 11:55:56 -- setup/hugepages.sh@52 -- # local node_ids 00:03:49.459 11:55:56 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:49.459 11:55:56 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:49.459 11:55:56 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:49.459 11:55:56 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:49.459 11:55:56 -- setup/hugepages.sh@62 -- # local user_nodes 00:03:49.459 11:55:56 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:49.459 11:55:56 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:49.459 11:55:56 -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:49.460 11:55:56 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:49.460 11:55:56 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:49.460 11:55:56 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:49.460 11:55:56 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:49.460 11:55:56 -- setup/hugepages.sh@73 -- # return 0 00:03:49.460 11:55:56 -- setup/hugepages.sh@198 -- # setup output 00:03:49.460 11:55:56 -- setup/common.sh@9 -- # [[ output == 
output ]] 00:03:49.460 11:55:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:03:52.748 0000:85:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:85:05.5 00:03:52.748 0000:ae:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:ae:05.5 00:03:52.748 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:52.748 0000:5e:00.0 (8086 0b60): Already using the vfio-pci driver 00:03:52.748 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:52.748 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:52.748 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:52.748 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:52.748 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:52.748 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:52.748 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:52.748 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:52.748 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:52.748 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:52.748 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:52.748 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:52.748 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:52.748 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:52.748 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:52.748 11:55:59 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:52.748 11:55:59 -- setup/hugepages.sh@89 -- # local node 00:03:52.748 11:55:59 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:52.748 11:55:59 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:52.748 11:55:59 -- setup/hugepages.sh@92 -- # local surp 00:03:52.748 11:55:59 -- setup/hugepages.sh@93 -- # local resv 00:03:52.748 11:55:59 -- setup/hugepages.sh@94 -- # 
local anon 00:03:52.748 11:55:59 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:52.748 11:55:59 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:52.748 11:55:59 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:52.748 11:55:59 -- setup/common.sh@18 -- # local node= 00:03:52.748 11:55:59 -- setup/common.sh@19 -- # local var val 00:03:52.748 11:55:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.748 11:55:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.748 11:55:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.748 11:55:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.748 11:55:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.748 11:55:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.748 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.748 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.748 11:55:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293552 kB' 'MemFree: 73281832 kB' 'MemAvailable: 77041620 kB' 'Buffers: 12460 kB' 'Cached: 14703632 kB' 'SwapCached: 0 kB' 'Active: 11570912 kB' 'Inactive: 3646404 kB' 'Active(anon): 11098204 kB' 'Inactive(anon): 0 kB' 'Active(file): 472708 kB' 'Inactive(file): 3646404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 503964 kB' 'Mapped: 186392 kB' 'Shmem: 10596980 kB' 'KReclaimable: 476660 kB' 'Slab: 845484 kB' 'SReclaimable: 476660 kB' 'SUnreclaim: 368824 kB' 'KernelStack: 15712 kB' 'PageTables: 8356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486804 kB' 'Committed_AS: 12502764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201004 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 
'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1230292 kB' 'DirectMap2M: 20465664 kB' 'DirectMap1G: 79691776 kB' 00:03:52.748 [xtrace elided: per-field scan of system meminfo — MemTotal through HardwareCorrupted — none match AnonHugePages, each taking the continue branch] 00:03:52.749 11:55:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:52.749 11:55:59 -- setup/common.sh@33 -- # echo 0 00:03:52.749 11:55:59 -- setup/common.sh@33 -- # return 0 00:03:52.749 11:55:59 -- setup/hugepages.sh@97 -- # anon=0 00:03:52.749 11:55:59 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:52.749 11:55:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.749 11:55:59 -- setup/common.sh@18 -- # local node= 00:03:52.749 11:55:59 -- setup/common.sh@19 
-- # local var val 00:03:52.749 11:55:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.749 11:55:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.749 11:55:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.749 11:55:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.749 11:55:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.749 11:55:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.749 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.749 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293552 kB' 'MemFree: 73282500 kB' 'MemAvailable: 77042288 kB' 'Buffers: 12460 kB' 'Cached: 14703632 kB' 'SwapCached: 0 kB' 'Active: 11570544 kB' 'Inactive: 3646404 kB' 'Active(anon): 11097836 kB' 'Inactive(anon): 0 kB' 'Active(file): 472708 kB' 'Inactive(file): 3646404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504072 kB' 'Mapped: 186276 kB' 'Shmem: 10596980 kB' 'KReclaimable: 476660 kB' 'Slab: 845424 kB' 'SReclaimable: 476660 kB' 'SUnreclaim: 368764 kB' 'KernelStack: 15680 kB' 'PageTables: 8248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486804 kB' 'Committed_AS: 12502776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200972 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1230292 kB' 'DirectMap2M: 20465664 kB' 'DirectMap1G: 79691776 kB' 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 
11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 11:55:59 -- 
setup/common.sh@32 -- # continue 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.750 
11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.750 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.750 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.751 
11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.751 11:55:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.751 11:55:59 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.751 11:55:59 -- setup/common.sh@33 -- # echo 0 00:03:52.751 11:55:59 -- setup/common.sh@33 -- # return 0 00:03:52.751 11:55:59 -- setup/hugepages.sh@99 -- # surp=0 00:03:52.751 11:55:59 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:52.751 11:55:59 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:52.751 11:55:59 -- setup/common.sh@18 -- # local node= 00:03:52.751 11:55:59 -- setup/common.sh@19 -- # local var val 00:03:52.751 11:55:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.751 11:55:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.751 11:55:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.751 11:55:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.751 11:55:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.751 11:55:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.751 11:55:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293552 kB' 'MemFree: 73282500 kB' 'MemAvailable: 77042288 kB' 'Buffers: 12460 kB' 'Cached: 14703644 kB' 'SwapCached: 0 kB' 'Active: 11570552 kB' 'Inactive: 3646404 kB' 'Active(anon): 11097844 kB' 'Inactive(anon): 0 kB' 'Active(file): 472708 kB' 'Inactive(file): 3646404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 504076 kB' 'Mapped: 186276 kB' 'Shmem: 10596992 kB' 'KReclaimable: 
476660 kB' 'Slab: 845424 kB' 'SReclaimable: 476660 kB' 'SUnreclaim: 368764 kB' 'KernelStack: 15680 kB' 'PageTables: 8248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486804 kB' 'Committed_AS: 12502788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200972 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1230292 kB' 'DirectMap2M: 20465664 kB' 'DirectMap1G: 79691776 kB' 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.751 11:55:59 -- 
setup/common.sh@32 -- # continue 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.751 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.751 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # 
continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 
11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 
00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.752 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.752 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:52.753 11:55:59 -- setup/common.sh@33 -- # echo 0 00:03:52.753 11:55:59 -- setup/common.sh@33 -- # return 0 00:03:52.753 11:55:59 -- setup/hugepages.sh@100 -- # resv=0 00:03:52.753 11:55:59 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:52.753 nr_hugepages=1024 00:03:52.753 11:55:59 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:52.753 resv_hugepages=0 00:03:52.753 11:55:59 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:52.753 surplus_hugepages=0 00:03:52.753 11:55:59 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:52.753 anon_hugepages=0 00:03:52.753 11:55:59 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.753 11:55:59 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:52.753 11:55:59 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:52.753 11:55:59 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:52.753 11:55:59 -- setup/common.sh@18 -- # local node= 00:03:52.753 11:55:59 -- setup/common.sh@19 -- # 
local var val 00:03:52.753 11:55:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.753 11:55:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.753 11:55:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:52.753 11:55:59 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:52.753 11:55:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.753 11:55:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.753 11:55:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293552 kB' 'MemFree: 73282248 kB' 'MemAvailable: 77042036 kB' 'Buffers: 12460 kB' 'Cached: 14703672 kB' 'SwapCached: 0 kB' 'Active: 11570212 kB' 'Inactive: 3646404 kB' 'Active(anon): 11097504 kB' 'Inactive(anon): 0 kB' 'Active(file): 472708 kB' 'Inactive(file): 3646404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 503680 kB' 'Mapped: 186276 kB' 'Shmem: 10597020 kB' 'KReclaimable: 476660 kB' 'Slab: 845424 kB' 'SReclaimable: 476660 kB' 'SUnreclaim: 368764 kB' 'KernelStack: 15664 kB' 'PageTables: 8196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486804 kB' 'Committed_AS: 12502804 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 200988 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1230292 kB' 'DirectMap2M: 20465664 kB' 'DirectMap1G: 79691776 kB' 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l 
]] 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:52.753 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.753 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.753 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.754 11:55:59 -- 
setup/common.sh@32 -- # continue 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # 
continue 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # continue 
00:03:52.754 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # continue 
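The long backslash runs in the trace (e.g. `\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l`) are not corruption: they are how bash's xtrace renders a quoted right-hand side of `[[ == ]]`, where quoting suppresses glob interpretation and forces a literal string comparison. A small illustration (variable names invented for the example):

```shell
#!/usr/bin/env bash
# Quoted RHS of [[ == ]] compares literally; xtrace (set -x) prints it with
# each character backslash-escaped, producing the \H\u\g\e... runs in the log.
get=HugePages_Total
var=HugePages_Total
if [[ $var == "$get" ]]; then
    echo "literal match"
fi
# An unquoted RHS is treated as a glob pattern instead:
[[ $var == HugePages_* ]] && echo "glob match"
```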
00:03:52.754 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.754 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.754 11:55:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:52.754 11:55:59 -- setup/common.sh@33 -- # echo 1024 00:03:52.754 11:55:59 -- setup/common.sh@33 -- # return 0 00:03:52.754 11:55:59 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:52.754 11:55:59 -- setup/hugepages.sh@112 -- # get_nodes 00:03:52.754 11:55:59 -- setup/hugepages.sh@27 -- # local node 00:03:52.754 
11:55:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.754 11:55:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:52.754 11:55:59 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:52.754 11:55:59 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:52.754 11:55:59 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:52.754 11:55:59 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:52.754 11:55:59 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:52.754 11:55:59 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:52.754 11:55:59 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:52.754 11:55:59 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:52.754 11:55:59 -- setup/common.sh@18 -- # local node=0 00:03:52.754 11:55:59 -- setup/common.sh@19 -- # local var val 00:03:52.755 11:55:59 -- setup/common.sh@20 -- # local mem_f mem 00:03:52.755 11:55:59 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:52.755 11:55:59 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:52.755 11:55:59 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:52.755 11:55:59 -- setup/common.sh@28 -- # mapfile -t mem 00:03:52.755 11:55:59 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:52.755 11:55:59 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069932 kB' 'MemFree: 31016356 kB' 'MemUsed: 17053576 kB' 'SwapCached: 0 kB' 'Active: 10399496 kB' 'Inactive: 3496976 kB' 'Active(anon): 10150420 kB' 'Inactive(anon): 0 kB' 'Active(file): 249076 kB' 'Inactive(file): 3496976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 13517836 kB' 'Mapped: 102756 kB' 'AnonPages: 381736 kB' 'Shmem: 9771784 kB' 'KernelStack: 9224 kB' 'PageTables: 5236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 
'KReclaimable: 345868 kB' 'Slab: 574324 kB' 'SReclaimable: 345868 kB' 'SUnreclaim: 228456 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.755 11:55:59 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.755 11:55:59 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # read -r 
var val _ 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.755 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.755 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 
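get_nodes (the hugepages.sh@29-33 entries in the trace) relies on the extglob pattern `node+([0-9])` to enumerate NUMA node directories and on the parameter expansion `${node##*node}` to pull out the node index for the `nodes_sys` array. A sketch of those two mechanisms, using hard-coded sample paths instead of a real /sys tree:

```shell
#!/usr/bin/env bash
shopt -s extglob   # enables the +([0-9]) pattern used by hugepages.sh

# Sample paths standing in for real /sys/devices/system/node/nodeN entries.
for node in /sys/devices/system/node/node0 /sys/devices/system/node/node1; do
    # Keep only basenames matching node<digits>, as the glob in the trace does.
    [[ ${node##*/} == node+([0-9]) ]] || continue
    # ${node##*node} strips the longest prefix ending in "node" -> the index.
    echo "node index: ${node##*node}"
done
```

This is why the trace assigns `nodes_sys[0]=1024` and `nodes_sys[1]=0` and then reports `no_nodes=2`: the loop ran once per matching nodeN directory.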
00:03:52.756 11:55:59 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.756 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.756 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.756 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.756 11:55:59 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.756 11:55:59 -- setup/common.sh@32 -- # continue 00:03:52.756 11:55:59 -- setup/common.sh@31 -- # IFS=': ' 00:03:52.756 11:55:59 -- setup/common.sh@31 -- # read -r var val _ 00:03:52.756 11:55:59 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:52.756 11:55:59 -- setup/common.sh@33 -- # echo 0 00:03:52.756 11:55:59 -- setup/common.sh@33 -- # return 0 00:03:52.756 11:55:59 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:52.756 11:55:59 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:52.756 11:55:59 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:52.756 11:55:59 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:52.756 11:55:59 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:52.756 node0=1024 expecting 1024 00:03:52.756 11:55:59 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:52.756 11:55:59 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:52.756 11:55:59 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:52.756 11:55:59 -- setup/hugepages.sh@202 -- # setup output 00:03:52.756 11:55:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:52.756 11:55:59 -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:03:56.944 0000:85:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:85:05.5 00:03:56.944 0000:ae:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:ae:05.5 00:03:56.944 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:56.944 0000:5e:00.0 (8086 0b60): 
Already using the vfio-pci driver 00:03:56.944 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:56.944 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:56.944 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:56.944 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:56.944 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:56.944 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:56.944 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:56.944 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:56.944 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:56.944 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:56.944 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:56.944 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:56.944 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:56.944 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:56.944 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:56.944 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:56.944 11:56:03 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:56.944 11:56:03 -- setup/hugepages.sh@89 -- # local node 00:03:56.944 11:56:03 -- setup/hugepages.sh@90 -- # local sorted_t 00:03:56.944 11:56:03 -- setup/hugepages.sh@91 -- # local sorted_s 00:03:56.944 11:56:03 -- setup/hugepages.sh@92 -- # local surp 00:03:56.944 11:56:03 -- setup/hugepages.sh@93 -- # local resv 00:03:56.944 11:56:03 -- setup/hugepages.sh@94 -- # local anon 00:03:56.944 11:56:03 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:56.944 11:56:03 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:56.944 11:56:03 -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:56.944 11:56:03 -- setup/common.sh@18 -- # local node= 00:03:56.944 
11:56:03 -- setup/common.sh@19 -- # local var val 00:03:56.944 11:56:03 -- setup/common.sh@20 -- # local mem_f mem 00:03:56.944 11:56:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.944 11:56:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.944 11:56:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.944 11:56:03 -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.944 11:56:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.944 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.944 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293552 kB' 'MemFree: 73274228 kB' 'MemAvailable: 77034016 kB' 'Buffers: 12460 kB' 'Cached: 14703744 kB' 'SwapCached: 0 kB' 'Active: 11572744 kB' 'Inactive: 3646404 kB' 'Active(anon): 11100036 kB' 'Inactive(anon): 0 kB' 'Active(file): 472708 kB' 'Inactive(file): 3646404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505936 kB' 'Mapped: 186392 kB' 'Shmem: 10597092 kB' 'KReclaimable: 476660 kB' 'Slab: 845724 kB' 'SReclaimable: 476660 kB' 'SUnreclaim: 369064 kB' 'KernelStack: 15696 kB' 'PageTables: 8328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486804 kB' 'Committed_AS: 12503272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201052 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1230292 kB' 'DirectMap2M: 20465664 kB' 'DirectMap1G: 79691776 kB' 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ 
MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 
11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # 
continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 
00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # 
read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 
11:56:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.945 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.945 11:56:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:56.946 11:56:03 -- setup/common.sh@33 -- # echo 0 00:03:56.946 11:56:03 -- setup/common.sh@33 -- # return 0 00:03:56.946 11:56:03 -- setup/hugepages.sh@97 -- # anon=0 00:03:56.946 11:56:03 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:56.946 11:56:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.946 11:56:03 -- setup/common.sh@18 -- # local node= 00:03:56.946 11:56:03 -- setup/common.sh@19 -- # local var val 00:03:56.946 11:56:03 -- setup/common.sh@20 -- # local mem_f mem 00:03:56.946 11:56:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.946 11:56:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.946 11:56:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.946 11:56:03 -- setup/common.sh@28 -- # 
mapfile -t mem 00:03:56.946 11:56:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.946 11:56:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293552 kB' 'MemFree: 73275444 kB' 'MemAvailable: 77035232 kB' 'Buffers: 12460 kB' 'Cached: 14703744 kB' 'SwapCached: 0 kB' 'Active: 11572160 kB' 'Inactive: 3646404 kB' 'Active(anon): 11099452 kB' 'Inactive(anon): 0 kB' 'Active(file): 472708 kB' 'Inactive(file): 3646404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505736 kB' 'Mapped: 186288 kB' 'Shmem: 10597092 kB' 'KReclaimable: 476660 kB' 'Slab: 845744 kB' 'SReclaimable: 476660 kB' 'SUnreclaim: 369084 kB' 'KernelStack: 15680 kB' 'PageTables: 8248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486804 kB' 'Committed_AS: 12503284 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201020 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1230292 kB' 'DirectMap2M: 20465664 kB' 'DirectMap1G: 79691776 kB' 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # continue 
00:03:56.946 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.946 11:56:03 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.946 11:56:03 -- setup/common.sh@31 -- 
# read -r var val _ 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 
00:03:56.946 11:56:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.946 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.946 11:56:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.947 11:56:03 -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.947 11:56:03 -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.947 11:56:03 -- setup/common.sh@32 -- 
# [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.947 11:56:03 -- setup/common.sh@33 -- # echo 0 00:03:56.947 11:56:03 -- setup/common.sh@33 -- # return 0 00:03:56.947 11:56:03 -- setup/hugepages.sh@99 -- # surp=0 00:03:56.947 11:56:03 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:56.947 11:56:03 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:56.947 11:56:03 -- setup/common.sh@18 -- # local node= 00:03:56.947 11:56:03 -- setup/common.sh@19 -- # local var val 00:03:56.947 11:56:03 -- setup/common.sh@20 -- # local mem_f mem 00:03:56.947 11:56:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.947 11:56:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.947 11:56:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.947 11:56:03 -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.947 11:56:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.947 11:56:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293552 kB' 'MemFree: 73275936 kB' 'MemAvailable: 77035724 kB' 'Buffers: 12460 kB' 'Cached: 14703756 kB' 'SwapCached: 0 kB' 'Active: 11572180 kB' 'Inactive: 3646404 kB' 'Active(anon): 11099472 kB' 'Inactive(anon): 0 kB' 'Active(file): 472708 kB' 'Inactive(file): 3646404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505740 kB' 'Mapped: 186288 kB' 'Shmem: 10597104 kB' 'KReclaimable: 476660 kB' 'Slab: 845744 kB' 'SReclaimable: 476660 kB' 'SUnreclaim: 369084 kB' 'KernelStack: 15680 kB' 'PageTables: 8248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486804 kB' 'Committed_AS: 12503296 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201036 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 
kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1230292 kB' 'DirectMap2M: 20465664 kB' 'DirectMap1G: 79691776 kB' 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.947 11:56:03 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.947 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.947 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # 
IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 
00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- 
setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 
-- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ HugePages_Total 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.948 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.948 11:56:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:56.948 11:56:03 -- setup/common.sh@33 -- # echo 0 00:03:56.948 11:56:03 -- setup/common.sh@33 -- # return 0 00:03:56.948 11:56:03 -- setup/hugepages.sh@100 -- # resv=0 00:03:56.948 11:56:03 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:56.948 nr_hugepages=1024 00:03:56.948 11:56:03 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:56.948 resv_hugepages=0 00:03:56.948 11:56:03 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:56.948 surplus_hugepages=0 00:03:56.948 11:56:03 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:56.948 anon_hugepages=0 00:03:56.948 11:56:03 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.948 11:56:03 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:56.949 11:56:03 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:56.949 11:56:03 -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:56.949 11:56:03 -- setup/common.sh@18 -- # local node= 00:03:56.949 11:56:03 -- setup/common.sh@19 -- # local var val 00:03:56.949 11:56:03 -- setup/common.sh@20 -- # local mem_f mem 00:03:56.949 11:56:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.949 11:56:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:56.949 11:56:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:56.949 11:56:03 -- setup/common.sh@28 -- 
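The walk that ends above is setup/common.sh's `get_meminfo` scanning every /proc/meminfo key (via `IFS=': '` and `read -r var val _`) until it reaches `HugePages_Rsvd`, then echoing the value. A minimal standalone sketch of that lookup is below; the function name is kept for readability, but this is a hedged reconstruction from the trace, not the SPDK helper itself, and it assumes a Linux /proc/meminfo with no per-node file:

```shell
# Hypothetical reconstruction of the lookup traced above: scan
# /proc/meminfo line by line, splitting each "Key: value kB" line on
# ': ', and print the value of the requested key (0 if absent).
# Per-node stats would instead be read from
# /sys/devices/system/node/node<N>/meminfo, as the trace does later
# for node=0.
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    echo 0
}

get_meminfo HugePages_Rsvd   # reserved hugepages (0 on this test host)
```

The real helper additionally strips a leading `Node <N> ` prefix with `mem=("${mem[@]#Node +([0-9]) }")` when a per-node file is used, which is the extglob expansion visible in the trace.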
# mapfile -t mem 00:03:56.949 11:56:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.949 11:56:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 92293552 kB' 'MemFree: 73275936 kB' 'MemAvailable: 77035724 kB' 'Buffers: 12460 kB' 'Cached: 14703772 kB' 'SwapCached: 0 kB' 'Active: 11572184 kB' 'Inactive: 3646404 kB' 'Active(anon): 11099476 kB' 'Inactive(anon): 0 kB' 'Active(file): 472708 kB' 'Inactive(file): 3646404 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505732 kB' 'Mapped: 186288 kB' 'Shmem: 10597120 kB' 'KReclaimable: 476660 kB' 'Slab: 845744 kB' 'SReclaimable: 476660 kB' 'SUnreclaim: 369084 kB' 'KernelStack: 15680 kB' 'PageTables: 8248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 53486804 kB' 'Committed_AS: 12503312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 201036 kB' 'VmallocChunk: 0 kB' 'Percpu: 75840 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1230292 kB' 'DirectMap2M: 20465664 kB' 'DirectMap1G: 79691776 kB' 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # continue 
00:03:56.949 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.949 11:56:03 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.949 11:56:03 -- 
setup/common.sh@31 -- # IFS=': ' 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 
00:03:56.949 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.949 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.949 11:56:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.950 
11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.950 11:56:03 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.950 11:56:03 -- 
setup/common.sh@31 -- # read -r var val _ 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:56.950 11:56:03 -- setup/common.sh@33 -- # echo 1024 00:03:56.950 11:56:03 -- setup/common.sh@33 -- # return 0 00:03:56.950 11:56:03 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:56.950 11:56:03 -- setup/hugepages.sh@112 -- # get_nodes 00:03:56.950 11:56:03 -- setup/hugepages.sh@27 -- # local node 00:03:56.950 11:56:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.950 11:56:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:56.950 11:56:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:56.950 11:56:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:56.950 
11:56:03 -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:56.950 11:56:03 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:56.950 11:56:03 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:56.950 11:56:03 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:56.950 11:56:03 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:56.950 11:56:03 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:56.950 11:56:03 -- setup/common.sh@18 -- # local node=0 00:03:56.950 11:56:03 -- setup/common.sh@19 -- # local var val 00:03:56.950 11:56:03 -- setup/common.sh@20 -- # local mem_f mem 00:03:56.950 11:56:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:56.950 11:56:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:56.950 11:56:03 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:56.950 11:56:03 -- setup/common.sh@28 -- # mapfile -t mem 00:03:56.950 11:56:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.950 11:56:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 48069932 kB' 'MemFree: 31009572 kB' 'MemUsed: 17060360 kB' 'SwapCached: 0 kB' 'Active: 10401056 kB' 'Inactive: 3496976 kB' 'Active(anon): 10151980 kB' 'Inactive(anon): 0 kB' 'Active(file): 249076 kB' 'Inactive(file): 3496976 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 13517868 kB' 'Mapped: 102768 kB' 'AnonPages: 383364 kB' 'Shmem: 9771816 kB' 'KernelStack: 9192 kB' 'PageTables: 5176 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 345868 kB' 'Slab: 574772 kB' 'SReclaimable: 345868 kB' 'SUnreclaim: 228904 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:56.950 11:56:03 
-- setup/common.sh@31 -- # read -r var val _ 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 
00:03:56.950 11:56:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.950 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.950 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.951 11:56:03 -- 
setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # [[ NFS_Unstable 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:56.951 11:56:03 -- setup/common.sh@32 -- # continue 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # IFS=': ' 00:03:56.951 11:56:03 -- setup/common.sh@31 -- # read -r var val _ 00:03:56.951 11:56:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:56.951 11:56:03 -- setup/common.sh@33 -- # echo 0 00:03:56.951 11:56:03 -- setup/common.sh@33 -- # return 0 00:03:56.951 11:56:03 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:56.951 11:56:03 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:56.951 11:56:03 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:56.951 11:56:03 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:56.951 11:56:03 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:56.951 node0=1024 expecting 1024 00:03:56.951 11:56:03 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:56.951 00:03:56.951 real 0m7.164s 00:03:56.951 user 0m2.629s 00:03:56.951 sys 0m4.627s 00:03:56.951 11:56:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.951 11:56:03 -- common/autotest_common.sh@10 -- # set +x 00:03:56.951 ************************************ 00:03:56.951 END TEST no_shrink_alloc 00:03:56.951 ************************************ 00:03:56.951 11:56:03 -- setup/hugepages.sh@217 -- # clear_hp 00:03:56.951 11:56:03 -- setup/hugepages.sh@37 -- # local node hp 00:03:56.951 11:56:03 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:56.951 11:56:03 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:56.951 11:56:03 -- setup/hugepages.sh@41 -- # echo 0 00:03:56.951 11:56:03 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:56.951 11:56:03 -- setup/hugepages.sh@41 -- # echo 0 00:03:56.951 11:56:03 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:56.951 11:56:03 -- 
setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:56.951 11:56:03 -- setup/hugepages.sh@41 -- # echo 0 00:03:56.951 11:56:03 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:56.951 11:56:03 -- setup/hugepages.sh@41 -- # echo 0 00:03:56.951 11:56:03 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:56.951 11:56:03 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:56.951 00:03:56.951 real 0m27.246s 00:03:56.951 user 0m9.743s 00:03:56.951 sys 0m16.770s 00:03:56.951 11:56:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:56.951 11:56:03 -- common/autotest_common.sh@10 -- # set +x 00:03:56.951 ************************************ 00:03:56.951 END TEST hugepages 00:03:56.951 ************************************ 00:03:56.951 11:56:03 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh 00:03:56.951 11:56:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:03:56.951 11:56:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:03:56.951 11:56:03 -- common/autotest_common.sh@10 -- # set +x 00:03:56.951 ************************************ 00:03:56.951 START TEST driver 00:03:56.951 ************************************ 00:03:56.951 11:56:03 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/driver.sh 00:03:56.951 * Looking for test storage... 
00:03:56.951 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:03:56.951 11:56:03 -- setup/driver.sh@68 -- # setup reset 00:03:56.952 11:56:03 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:56.952 11:56:03 -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:02.248 11:56:08 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:02.248 11:56:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:02.248 11:56:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:02.248 11:56:08 -- common/autotest_common.sh@10 -- # set +x 00:04:02.248 ************************************ 00:04:02.248 START TEST guess_driver 00:04:02.248 ************************************ 00:04:02.248 11:56:08 -- common/autotest_common.sh@1104 -- # guess_driver 00:04:02.248 11:56:08 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:02.248 11:56:08 -- setup/driver.sh@47 -- # local fail=0 00:04:02.248 11:56:08 -- setup/driver.sh@49 -- # pick_driver 00:04:02.248 11:56:08 -- setup/driver.sh@36 -- # vfio 00:04:02.248 11:56:08 -- setup/driver.sh@21 -- # local iommu_grups 00:04:02.248 11:56:08 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:02.248 11:56:08 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:02.248 11:56:08 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:02.248 11:56:08 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:02.248 11:56:08 -- setup/driver.sh@29 -- # (( 215 > 0 )) 00:04:02.248 11:56:08 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:02.248 11:56:08 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:02.248 11:56:08 -- setup/driver.sh@12 -- # dep vfio_pci 00:04:02.248 11:56:08 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:02.248 11:56:08 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:02.248 
insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:02.248 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:02.248 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:02.248 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:02.248 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:02.248 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:02.248 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:02.248 11:56:08 -- setup/driver.sh@30 -- # return 0 00:04:02.248 11:56:08 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:02.248 11:56:08 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:02.248 11:56:08 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:02.248 11:56:08 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:02.248 Looking for driver=vfio-pci 00:04:02.248 11:56:08 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:02.248 11:56:08 -- setup/driver.sh@45 -- # setup output config 00:04:02.248 11:56:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.248 11:56:08 -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:05.535 11:56:12 -- setup/driver.sh@58 -- # [[ not == \-\> ]] 00:04:05.535 11:56:12 -- setup/driver.sh@58 -- # continue 00:04:05.535 11:56:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.535 11:56:12 -- setup/driver.sh@58 -- # [[ not == \-\> ]] 00:04:05.535 11:56:12 -- setup/driver.sh@58 -- # continue 00:04:05.535 11:56:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.535 11:56:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:05.535 11:56:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:05.535 
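The driver selection above boils down to two checks: the host has IOMMU groups (`(( 215 > 0 ))` in the log) and `modprobe --show-depends vfio_pci` resolves to real `.ko` files. A small sketch of that decision, written as a pure function so it does not touch the host; the fallback string mirrors the "No valid driver found" pattern the log compares against:

```shell
#!/usr/bin/env bash
# Sketch of the pick_driver logic in setup/driver.sh: choose vfio-pci only
# when IOMMU groups exist AND the module dependency chain contains .ko
# objects. Inputs are passed in explicitly instead of probing sysfs/modprobe.
pick_driver() {
    local groups=$1    # e.g. count of /sys/kernel/iommu_groups/* entries
    local deps=$2      # e.g. output of: modprobe --show-depends vfio_pci
    if (( groups > 0 )) && [[ $deps == *.ko* ]]; then
        echo vfio-pci
    else
        echo "No valid driver found"
    fi
}

# Values mirroring this log: 215 IOMMU groups, a vfio-pci insmod chain.
pick_driver 215 "insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz"
```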
11:56:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.535 11:56:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:05.535 11:56:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:05.535 11:56:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.535 11:56:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:05.535 11:56:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:05.535 11:56:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.535 11:56:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:05.535 11:56:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:05.535 11:56:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.535 11:56:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:05.535 11:56:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:05.535 11:56:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.535 11:56:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:05.535 11:56:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:05.535 11:56:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.535 11:56:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:05.535 11:56:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:05.535 11:56:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.535 11:56:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:05.535 11:56:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:05.535 11:56:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.535 11:56:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:05.535 11:56:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:05.535 11:56:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.535 11:56:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:05.535 11:56:12 -- setup/driver.sh@61 
-- # [[ vfio-pci == vfio-pci ]] 00:04:05.535 11:56:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.535 11:56:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:05.535 11:56:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:05.535 11:56:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.535 11:56:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:05.535 11:56:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:05.535 11:56:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.535 11:56:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:05.535 11:56:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:05.535 11:56:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.535 11:56:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:05.535 11:56:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:05.535 11:56:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.535 11:56:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:05.535 11:56:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:05.535 11:56:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:05.535 11:56:12 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:05.535 11:56:12 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:05.535 11:56:12 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.469 11:56:13 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:06.469 11:56:13 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:06.469 11:56:13 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:06.469 11:56:13 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:06.469 11:56:13 -- setup/driver.sh@65 -- # setup reset 00:04:06.469 11:56:13 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:06.469 11:56:13 -- setup/common.sh@12 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:11.735 00:04:11.735 real 0m9.728s 00:04:11.735 user 0m2.692s 00:04:11.735 sys 0m5.053s 00:04:11.735 11:56:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.735 11:56:18 -- common/autotest_common.sh@10 -- # set +x 00:04:11.735 ************************************ 00:04:11.735 END TEST guess_driver 00:04:11.735 ************************************ 00:04:11.735 00:04:11.735 real 0m14.592s 00:04:11.735 user 0m4.074s 00:04:11.735 sys 0m7.810s 00:04:11.735 11:56:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.735 11:56:18 -- common/autotest_common.sh@10 -- # set +x 00:04:11.735 ************************************ 00:04:11.735 END TEST driver 00:04:11.735 ************************************ 00:04:11.735 11:56:18 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh 00:04:11.735 11:56:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:11.735 11:56:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:11.735 11:56:18 -- common/autotest_common.sh@10 -- # set +x 00:04:11.735 ************************************ 00:04:11.735 START TEST devices 00:04:11.735 ************************************ 00:04:11.735 11:56:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/devices.sh 00:04:11.735 * Looking for test storage... 
00:04:11.735 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup 00:04:11.735 11:56:18 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:11.735 11:56:18 -- setup/devices.sh@192 -- # setup reset 00:04:11.735 11:56:18 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:11.735 11:56:18 -- setup/common.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:15.024 11:56:22 -- setup/devices.sh@194 -- # get_zoned_devs 00:04:15.024 11:56:22 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:04:15.024 11:56:22 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:04:15.024 11:56:22 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:04:15.024 11:56:22 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:04:15.024 11:56:22 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:04:15.024 11:56:22 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:04:15.024 11:56:22 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:15.024 11:56:22 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:04:15.024 11:56:22 -- setup/devices.sh@196 -- # blocks=() 00:04:15.024 11:56:22 -- setup/devices.sh@196 -- # declare -a blocks 00:04:15.024 11:56:22 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:15.024 11:56:22 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:15.024 11:56:22 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:15.024 11:56:22 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:15.024 11:56:22 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:15.024 11:56:22 -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:15.024 11:56:22 -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:04:15.024 11:56:22 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:04:15.024 11:56:22 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:15.024 11:56:22 -- scripts/common.sh@380 -- # 
local block=nvme0n1 pt 00:04:15.024 11:56:22 -- scripts/common.sh@389 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:15.024 No valid GPT data, bailing 00:04:15.024 11:56:22 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:15.024 11:56:22 -- scripts/common.sh@393 -- # pt= 00:04:15.024 11:56:22 -- scripts/common.sh@394 -- # return 1 00:04:15.024 11:56:22 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:15.024 11:56:22 -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:15.024 11:56:22 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:15.024 11:56:22 -- setup/common.sh@80 -- # echo 3840755982336 00:04:15.024 11:56:22 -- setup/devices.sh@204 -- # (( 3840755982336 >= min_disk_size )) 00:04:15.024 11:56:22 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:15.024 11:56:22 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:04:15.024 11:56:22 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:15.024 11:56:22 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:15.024 11:56:22 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:15.024 11:56:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:15.024 11:56:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:15.024 11:56:22 -- common/autotest_common.sh@10 -- # set +x 00:04:15.024 ************************************ 00:04:15.024 START TEST nvme_mount 00:04:15.024 ************************************ 00:04:15.024 11:56:22 -- common/autotest_common.sh@1104 -- # nvme_mount 00:04:15.024 11:56:22 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:15.024 11:56:22 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:15.024 11:56:22 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:15.024 11:56:22 -- setup/devices.sh@98 -- # 
nvme_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:15.024 11:56:22 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:15.024 11:56:22 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:15.024 11:56:22 -- setup/common.sh@40 -- # local part_no=1 00:04:15.024 11:56:22 -- setup/common.sh@41 -- # local size=1073741824 00:04:15.024 11:56:22 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:15.024 11:56:22 -- setup/common.sh@44 -- # parts=() 00:04:15.024 11:56:22 -- setup/common.sh@44 -- # local parts 00:04:15.024 11:56:22 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:15.024 11:56:22 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:15.024 11:56:22 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:15.024 11:56:22 -- setup/common.sh@46 -- # (( part++ )) 00:04:15.024 11:56:22 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:15.024 11:56:22 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:15.024 11:56:22 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:15.024 11:56:22 -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:15.962 Creating new GPT entries in memory. 00:04:15.962 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:15.962 other utilities. 00:04:15.962 11:56:23 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:15.962 11:56:23 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:15.962 11:56:23 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:15.962 11:56:23 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:15.962 11:56:23 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:17.338 Creating new GPT entries in memory. 00:04:17.338 The operation has completed successfully. 
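The partition step just logged zaps the disk and creates one 1 GiB partition; the `--new=1:2048:2099199` bound comes from converting the byte size to 512-byte sectors. A compute-only sketch of that arithmetic (commands are echoed, not executed, since running `sgdisk --zap-all` against a live disk is destructive; the device path is a placeholder):

```shell
#!/usr/bin/env bash
# Sketch of the partition_drive math in setup/common.sh: size in bytes is
# divided down to 512-byte sectors, and the single partition starts at
# sector 2048. Echoed rather than executed for safety.
disk=/dev/nvme0n1                 # placeholder device, do not run for real
size=1073741824                   # 1 GiB in bytes
(( size /= 512 ))                 # convert to 512-byte sectors
part_start=2048
(( part_end = part_start + size - 1 ))

echo "sgdisk $disk --zap-all"
echo "flock $disk sgdisk $disk --new=1:$part_start:$part_end"
```

With these inputs `part_end` works out to 2099199, matching the `--new=1:2048:2099199` call in the log.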
00:04:17.338 11:56:24 -- setup/common.sh@57 -- # (( part++ )) 00:04:17.338 11:56:24 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:17.338 11:56:24 -- setup/common.sh@62 -- # wait 1161348 00:04:17.338 11:56:24 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.338 11:56:24 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:17.338 11:56:24 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.338 11:56:24 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:17.338 11:56:24 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:17.338 11:56:24 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.338 11:56:24 -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:17.338 11:56:24 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:17.338 11:56:24 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:17.338 11:56:24 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:17.338 11:56:24 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:17.338 11:56:24 -- setup/devices.sh@53 -- # local found=0 00:04:17.339 11:56:24 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:17.339 11:56:24 -- setup/devices.sh@56 -- # : 00:04:17.339 11:56:24 -- setup/devices.sh@59 -- # local pci status 00:04:17.339 11:56:24 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:17.339 
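Earlier in this test the disk was admitted only after a size gate: `sec_size_to_bytes` reported 3840755982336 bytes against a `min_disk_size` of 3221225472 (3 GiB). A sketch of that gate, with the sysfs read stubbed to the value the log reports (the real helper multiplies `/sys/block/<dev>/size`, in sectors, by 512):

```shell
#!/usr/bin/env bash
# Sketch of the min-size gate in setup/devices.sh: a block device becomes
# the test disk only if its byte size clears min_disk_size.
min_disk_size=3221225472              # 3 GiB, as in the log

sec_size_to_bytes() {
    # Stub: the real helper reads /sys/block/$1/size (sectors) * 512.
    # Hard-coded to the value this log reports for nvme0n1.
    echo 3840755982336
}

size=$(sec_size_to_bytes nvme0n1)
if (( size >= min_disk_size )); then
    echo "nvme0n1 usable"
fi
```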
11:56:24 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:17.339 11:56:24 -- setup/devices.sh@47 -- # setup output config 00:04:17.339 11:56:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:17.339 11:56:24 -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:20.628 11:56:27 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:20.628 11:56:27 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:20.628 11:56:27 -- setup/devices.sh@63 -- # found=1 00:04:20.628 11:56:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.628 11:56:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:20.628 11:56:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.628 11:56:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:20.628 11:56:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.628 11:56:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:20.628 11:56:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.628 11:56:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:20.628 11:56:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.628 11:56:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:20.628 11:56:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.628 11:56:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:20.628 11:56:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.628 11:56:27 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:20.628 11:56:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.628 11:56:27 -- setup/devices.sh@62 -- # [[ 
0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:20.628 11:56:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.628 11:56:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:20.628 11:56:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.628 11:56:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:20.628 11:56:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.628 11:56:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:20.628 11:56:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.628 11:56:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:20.628 11:56:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.628 11:56:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:20.628 11:56:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.628 11:56:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:20.628 11:56:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.628 11:56:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:20.628 11:56:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.628 11:56:27 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:20.628 11:56:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.628 11:56:27 -- setup/devices.sh@62 -- # [[ 0000:85:05.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:20.628 11:56:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.628 11:56:27 -- setup/devices.sh@62 -- # [[ 0000:85:05.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:20.628 11:56:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.628 11:56:27 -- setup/devices.sh@62 -- # [[ 0000:ae:05.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:20.628 11:56:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.628 
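The wall of `[[ <bdf> == \0\0\0\0\:\5\e\:\0\0\.\0 ]]` checks above is the `verify` loop: every "pci ... status" line from `setup.sh config` is compared against the one allowed controller, and a matching status sets `found=1`. A sketch of that loop over abbreviated sample lines (BDFs are taken from this log; the status text is shortened):

```shell
#!/usr/bin/env bash
# Sketch of the verify loop in setup/devices.sh: read BDF and status from
# each config line, skip devices other than the allowed one, and mark
# found=1 when its status shows the expected active mount.
allowed=0000:5e:00.0
found=0
while read -r pci _ _ status; do
    [[ $pci == "$allowed" ]] || continue
    [[ $status == *"mount@nvme0n1:nvme0n1p1"* ]] && found=1
done <<'EOF'
0000:5e:00.0 nvme nvme0 Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev
0000:00:04.0 ioat - -
0000:80:04.0 ioat - -
EOF
echo "found=$found"
```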
11:56:27 -- setup/devices.sh@62 -- # [[ 0000:ae:05.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:20.628 11:56:27 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.628 11:56:27 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:20.628 11:56:27 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:20.628 11:56:27 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.628 11:56:27 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:20.628 11:56:27 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:20.628 11:56:27 -- setup/devices.sh@110 -- # cleanup_nvme 00:04:20.628 11:56:27 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.628 11:56:27 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.628 11:56:27 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:20.628 11:56:27 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:20.628 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:20.628 11:56:27 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:20.628 11:56:27 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:20.888 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:20.888 /dev/nvme0n1: 8 bytes were erased at offset 0x37e3ee55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:20.888 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:20.888 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:20.888 11:56:28 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:20.888 11:56:28 -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:20.888 11:56:28 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.888 11:56:28 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:20.888 11:56:28 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:20.888 11:56:28 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.888 11:56:28 -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:20.888 11:56:28 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:20.888 11:56:28 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:20.888 11:56:28 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:20.888 11:56:28 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:20.888 11:56:28 -- setup/devices.sh@53 -- # local found=0 00:04:20.888 11:56:28 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:20.888 11:56:28 -- setup/devices.sh@56 -- # : 00:04:20.888 11:56:28 -- setup/devices.sh@59 -- # local pci status 00:04:20.888 11:56:28 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:20.888 11:56:28 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:20.888 11:56:28 -- setup/devices.sh@47 -- # setup output config 00:04:20.888 11:56:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.888 11:56:28 -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:24.176 11:56:31 -- 
setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.176 11:56:31 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:24.176 11:56:31 -- setup/devices.sh@63 -- # found=1 00:04:24.176 11:56:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.176 11:56:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.176 11:56:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.176 11:56:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.176 11:56:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.176 11:56:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.176 11:56:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.176 11:56:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.176 11:56:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.176 11:56:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.176 11:56:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.176 11:56:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.176 11:56:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.176 11:56:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.176 11:56:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.176 11:56:31 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.176 11:56:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.176 11:56:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.176 11:56:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.176 11:56:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.176 11:56:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.176 11:56:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.176 11:56:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.176 11:56:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.176 11:56:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.176 11:56:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.176 11:56:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.176 11:56:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.176 11:56:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.176 11:56:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.176 11:56:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.176 11:56:31 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.176 11:56:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.176 11:56:31 -- setup/devices.sh@62 -- # [[ 0000:85:05.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.176 11:56:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.176 11:56:31 -- setup/devices.sh@62 -- # [[ 0000:85:05.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.176 11:56:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.176 11:56:31 -- setup/devices.sh@62 -- # [[ 0000:ae:05.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.176 11:56:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.176 11:56:31 -- setup/devices.sh@62 -- # [[ 0000:ae:05.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:24.176 11:56:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.434 11:56:31 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:24.434 11:56:31 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount ]] 
00:04:24.434 11:56:31 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.435 11:56:31 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:24.435 11:56:31 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:24.435 11:56:31 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:24.435 11:56:31 -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:04:24.435 11:56:31 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:24.435 11:56:31 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:24.435 11:56:31 -- setup/devices.sh@50 -- # local mount_point= 00:04:24.435 11:56:31 -- setup/devices.sh@51 -- # local test_file= 00:04:24.435 11:56:31 -- setup/devices.sh@53 -- # local found=0 00:04:24.435 11:56:31 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:24.435 11:56:31 -- setup/devices.sh@59 -- # local pci status 00:04:24.435 11:56:31 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:24.435 11:56:31 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:24.435 11:56:31 -- setup/devices.sh@47 -- # setup output config 00:04:24.435 11:56:31 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:24.435 11:56:31 -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:27.724 11:56:34 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.724 11:56:34 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:27.724 11:56:34 -- setup/devices.sh@63 -- # found=1 00:04:27.724 11:56:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.724 11:56:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == 
\0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.724 11:56:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.724 11:56:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.724 11:56:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.724 11:56:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.724 11:56:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.724 11:56:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.724 11:56:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.724 11:56:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.724 11:56:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.724 11:56:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.724 11:56:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.724 11:56:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.724 11:56:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.724 11:56:34 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.724 11:56:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.724 11:56:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.724 11:56:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.724 11:56:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.724 11:56:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.724 11:56:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.724 11:56:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.724 11:56:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.724 11:56:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.724 11:56:34 -- 
setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.724 11:56:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.724 11:56:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.724 11:56:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.724 11:56:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.724 11:56:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.724 11:56:34 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.724 11:56:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.724 11:56:34 -- setup/devices.sh@62 -- # [[ 0000:85:05.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.724 11:56:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.724 11:56:34 -- setup/devices.sh@62 -- # [[ 0000:85:05.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.724 11:56:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.724 11:56:34 -- setup/devices.sh@62 -- # [[ 0000:ae:05.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.724 11:56:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.724 11:56:34 -- setup/devices.sh@62 -- # [[ 0000:ae:05.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:27.724 11:56:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:27.983 11:56:35 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:27.983 11:56:35 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:27.983 11:56:35 -- setup/devices.sh@68 -- # return 0 00:04:27.983 11:56:35 -- setup/devices.sh@128 -- # cleanup_nvme 00:04:27.983 11:56:35 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:27.983 11:56:35 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:27.983 11:56:35 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:27.983 11:56:35 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:27.983 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:04:27.983 00:04:27.983 real 0m12.854s 00:04:27.983 user 0m3.725s 00:04:27.983 sys 0m7.123s 00:04:27.983 11:56:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:27.983 11:56:35 -- common/autotest_common.sh@10 -- # set +x 00:04:27.983 ************************************ 00:04:27.983 END TEST nvme_mount 00:04:27.983 ************************************ 00:04:27.983 11:56:35 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:27.983 11:56:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:04:27.983 11:56:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:04:27.983 11:56:35 -- common/autotest_common.sh@10 -- # set +x 00:04:27.983 ************************************ 00:04:27.983 START TEST dm_mount 00:04:27.983 ************************************ 00:04:27.983 11:56:35 -- common/autotest_common.sh@1104 -- # dm_mount 00:04:27.983 11:56:35 -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:27.983 11:56:35 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:27.983 11:56:35 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:27.983 11:56:35 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:27.983 11:56:35 -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:27.983 11:56:35 -- setup/common.sh@40 -- # local part_no=2 00:04:27.983 11:56:35 -- setup/common.sh@41 -- # local size=1073741824 00:04:27.983 11:56:35 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:27.983 11:56:35 -- setup/common.sh@44 -- # parts=() 00:04:27.983 11:56:35 -- setup/common.sh@44 -- # local parts 00:04:27.983 11:56:35 -- setup/common.sh@46 -- # (( part = 1 )) 00:04:27.984 11:56:35 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:27.984 11:56:35 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:27.984 11:56:35 -- setup/common.sh@46 -- # (( part++ )) 00:04:27.984 11:56:35 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:27.984 11:56:35 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:27.984 
11:56:35 -- setup/common.sh@46 -- # (( part++ )) 00:04:27.984 11:56:35 -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:27.984 11:56:35 -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:27.984 11:56:35 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:27.984 11:56:35 -- setup/common.sh@53 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:28.940 Creating new GPT entries in memory. 00:04:28.940 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:28.940 other utilities. 00:04:28.940 11:56:36 -- setup/common.sh@57 -- # (( part = 1 )) 00:04:28.940 11:56:36 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:28.940 11:56:36 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:28.940 11:56:36 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:28.940 11:56:36 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:29.886 Creating new GPT entries in memory. 00:04:29.886 The operation has completed successfully. 00:04:29.886 11:56:37 -- setup/common.sh@57 -- # (( part++ )) 00:04:29.886 11:56:37 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:29.886 11:56:37 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:29.886 11:56:37 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:29.886 11:56:37 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:31.261 The operation has completed successfully. 
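The sgdisk trace above derives each partition's sector range from a size converted to 512-byte sectors, anchoring the first partition at sector 2048 and starting each subsequent one right after the previous end. A minimal sketch of that arithmetic (a standalone reconstruction of the traced `setup/common.sh` loop, not the script itself; the `echo` stands in for the real `flock ... sgdisk --new` call):

```shell
#!/usr/bin/env bash
# Partition-range arithmetic as traced above: size is bytes / 512,
# part 1 starts at sector 2048, each later part starts at prev end + 1.
size=$((1073741824 / 512))   # 1 GiB in 512-byte sectors -> 2097152
part_no=2
part_start=0 part_end=0
for ((part = 1; part <= part_no; part++)); do
    (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
    (( part_end = part_start + size - 1 ))
    # Real run: flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=$part:$part_start:$part_end
    echo "--new=$part:$part_start:$part_end"
done
# -> --new=1:2048:2099199
#    --new=2:2099200:4196351
```

The two ranges printed match the `sgdisk --new=1:2048:2099199` and `--new=2:2099200:4196351` invocations recorded in the log.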
00:04:31.261 11:56:38 -- setup/common.sh@57 -- # (( part++ )) 00:04:31.261 11:56:38 -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:31.261 11:56:38 -- setup/common.sh@62 -- # wait 1165521 00:04:31.261 11:56:38 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:31.261 11:56:38 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:31.261 11:56:38 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:31.261 11:56:38 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:31.261 11:56:38 -- setup/devices.sh@160 -- # for t in {1..5} 00:04:31.261 11:56:38 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:31.261 11:56:38 -- setup/devices.sh@161 -- # break 00:04:31.261 11:56:38 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:31.261 11:56:38 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:31.261 11:56:38 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:04:31.261 11:56:38 -- setup/devices.sh@166 -- # dm=dm-0 00:04:31.261 11:56:38 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:04:31.261 11:56:38 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:04:31.261 11:56:38 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:31.261 11:56:38 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount size= 00:04:31.261 11:56:38 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:31.262 11:56:38 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:31.262 11:56:38 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:31.262 11:56:38 -- setup/common.sh@72 -- # mount 
/dev/mapper/nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:31.262 11:56:38 -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:31.262 11:56:38 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:31.262 11:56:38 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:31.262 11:56:38 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:31.262 11:56:38 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:31.262 11:56:38 -- setup/devices.sh@53 -- # local found=0 00:04:31.262 11:56:38 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:31.262 11:56:38 -- setup/devices.sh@56 -- # : 00:04:31.262 11:56:38 -- setup/devices.sh@59 -- # local pci status 00:04:31.262 11:56:38 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:31.262 11:56:38 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:31.262 11:56:38 -- setup/devices.sh@47 -- # setup output config 00:04:31.262 11:56:38 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.262 11:56:38 -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:34.543 11:56:41 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.543 11:56:41 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:34.543 11:56:41 -- setup/devices.sh@63 -- # found=1 00:04:34.543 11:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.543 11:56:41 -- 
setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.543 11:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.543 11:56:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.543 11:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.543 11:56:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.543 11:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.543 11:56:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.543 11:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.543 11:56:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.543 11:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.543 11:56:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.543 11:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.543 11:56:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.543 11:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.543 11:56:41 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.543 11:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.543 11:56:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.543 11:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.543 11:56:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.543 11:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.543 11:56:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.543 11:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.543 11:56:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.543 11:56:41 -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:34.543 11:56:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.543 11:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.543 11:56:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.543 11:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.543 11:56:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.543 11:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.543 11:56:41 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.543 11:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.543 11:56:41 -- setup/devices.sh@62 -- # [[ 0000:85:05.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.543 11:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.543 11:56:41 -- setup/devices.sh@62 -- # [[ 0000:85:05.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.543 11:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.543 11:56:41 -- setup/devices.sh@62 -- # [[ 0000:ae:05.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.543 11:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.543 11:56:41 -- setup/devices.sh@62 -- # [[ 0000:ae:05.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:34.543 11:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.802 11:56:41 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:34.802 11:56:41 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:34.802 11:56:41 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:34.802 11:56:41 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:34.802 11:56:41 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:34.802 11:56:41 -- 
setup/devices.sh@182 -- # umount /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:34.802 11:56:41 -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:04:34.802 11:56:41 -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:34.802 11:56:41 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:04:34.802 11:56:41 -- setup/devices.sh@50 -- # local mount_point= 00:04:34.802 11:56:41 -- setup/devices.sh@51 -- # local test_file= 00:04:34.802 11:56:41 -- setup/devices.sh@53 -- # local found=0 00:04:34.803 11:56:41 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:34.803 11:56:41 -- setup/devices.sh@59 -- # local pci status 00:04:34.803 11:56:41 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:34.803 11:56:41 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:34.803 11:56:41 -- setup/devices.sh@47 -- # setup output config 00:04:34.803 11:56:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.803 11:56:41 -- setup/common.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh config 00:04:38.091 11:56:45 -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.091 11:56:45 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:04:38.091 11:56:45 -- setup/devices.sh@63 -- # found=1 00:04:38.091 11:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.091 11:56:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.091 11:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.091 11:56:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.091 11:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.091 11:56:45 -- 
setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.091 11:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.091 11:56:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.091 11:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.091 11:56:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.091 11:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.091 11:56:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.091 11:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.091 11:56:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.091 11:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.091 11:56:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.091 11:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.091 11:56:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.091 11:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.091 11:56:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.091 11:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.091 11:56:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.091 11:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.091 11:56:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.091 11:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.091 11:56:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.091 11:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.091 11:56:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.091 11:56:45 -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:04:38.091 11:56:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.091 11:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.091 11:56:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.091 11:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.091 11:56:45 -- setup/devices.sh@62 -- # [[ 0000:85:05.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.091 11:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.091 11:56:45 -- setup/devices.sh@62 -- # [[ 0000:85:05.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.091 11:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.091 11:56:45 -- setup/devices.sh@62 -- # [[ 0000:ae:05.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.091 11:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.091 11:56:45 -- setup/devices.sh@62 -- # [[ 0000:ae:05.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:38.091 11:56:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:38.091 11:56:45 -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:38.091 11:56:45 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:38.091 11:56:45 -- setup/devices.sh@68 -- # return 0 00:04:38.091 11:56:45 -- setup/devices.sh@187 -- # cleanup_dm 00:04:38.091 11:56:45 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:38.091 11:56:45 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:38.091 11:56:45 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:38.091 11:56:45 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:38.091 11:56:45 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:38.091 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:38.091 11:56:45 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:38.091 11:56:45 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:38.091 
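In the dm_mount trace above, `/dev/mapper/nvme_dm_test` is resolved with `readlink -f` to `/dev/dm-0`, the basename is extracted, and each backing partition is checked for that holder under `/sys/class/block/<part>/holders/`. A sketch of that name handling (the hard-coded `/dev/dm-0` is a stand-in for the live `readlink` result; on a real system the holders check needs the actual devices):

```shell
#!/usr/bin/env bash
# Resolve a device-mapper name to its dm-N node, as in the trace above.
# Live system: dm=$(readlink -f /dev/mapper/nvme_dm_test)
dm="/dev/dm-0"        # stand-in for the readlink result seen in the log
dm=${dm##*/}          # strip the directory part -> dm-0
echo "$dm"
# The test then verifies the holder links, e.g.:
#   [[ -e /sys/class/block/nvme0n1p1/holders/$dm ]]
#   [[ -e /sys/class/block/nvme0n1p2/holders/$dm ]]
```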
00:04:38.091 real 0m10.194s 00:04:38.091 user 0m2.495s 00:04:38.091 sys 0m4.836s 00:04:38.091 11:56:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.091 11:56:45 -- common/autotest_common.sh@10 -- # set +x 00:04:38.091 ************************************ 00:04:38.091 END TEST dm_mount 00:04:38.091 ************************************ 00:04:38.091 11:56:45 -- setup/devices.sh@1 -- # cleanup 00:04:38.091 11:56:45 -- setup/devices.sh@11 -- # cleanup_nvme 00:04:38.091 11:56:45 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/nvme_mount 00:04:38.091 11:56:45 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:38.091 11:56:45 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:38.091 11:56:45 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:38.091 11:56:45 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:38.349 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:38.349 /dev/nvme0n1: 8 bytes were erased at offset 0x37e3ee55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:38.349 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:38.350 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:38.350 11:56:45 -- setup/devices.sh@12 -- # cleanup_dm 00:04:38.350 11:56:45 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/crypto-phy-autotest/spdk/test/setup/dm_mount 00:04:38.350 11:56:45 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:38.350 11:56:45 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:38.350 11:56:45 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:38.350 11:56:45 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:38.350 11:56:45 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:38.608 00:04:38.608 real 0m27.122s 00:04:38.608 user 0m7.521s 00:04:38.608 sys 0m14.521s 00:04:38.608 11:56:45 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:04:38.608 11:56:45 -- common/autotest_common.sh@10 -- # set +x 00:04:38.608 ************************************ 00:04:38.608 END TEST devices 00:04:38.608 ************************************ 00:04:38.608 00:04:38.608 real 1m33.637s 00:04:38.608 user 0m29.205s 00:04:38.608 sys 0m54.211s 00:04:38.608 11:56:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.608 11:56:45 -- common/autotest_common.sh@10 -- # set +x 00:04:38.608 ************************************ 00:04:38.608 END TEST setup.sh 00:04:38.608 ************************************ 00:04:38.608 11:56:45 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh status 00:04:41.897 Hugepages 00:04:41.897 node hugesize free / total 00:04:42.156 node0 1048576kB 0 / 0 00:04:42.156 node0 2048kB 1024 / 1024 00:04:42.156 node1 1048576kB 0 / 0 00:04:42.156 node1 2048kB 1024 / 1024 00:04:42.156 00:04:42.156 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:42.156 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:42.156 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:42.156 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:42.156 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:42.156 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:42.156 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:42.156 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:42.156 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:42.156 NVMe 0000:5e:00.0 8086 0b60 0 nvme nvme0 nvme0n1 00:04:42.156 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:42.156 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:42.156 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:42.156 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:42.156 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:42.156 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:42.156 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:42.156 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:42.156 VMD 0000:85:05.5 8086 201d 1 - - - 
00:04:42.156 VMD 0000:ae:05.5 8086 201d 1 - - - 00:04:42.156 11:56:49 -- spdk/autotest.sh@141 -- # uname -s 00:04:42.156 11:56:49 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:04:42.156 11:56:49 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:04:42.156 11:56:49 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:46.348 0000:85:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:85:05.5 00:04:46.348 0000:ae:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:ae:05.5 00:04:46.348 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:46.348 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:46.348 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:46.348 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:46.348 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:46.348 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:46.348 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:46.348 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:46.348 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:46.348 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:46.348 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:46.348 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:46.348 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:46.348 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:46.348 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:46.348 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:47.284 0000:5e:00.0 (8086 0b60): nvme -> vfio-pci 00:04:47.284 11:56:54 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:48.220 11:56:55 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:48.220 11:56:55 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:48.220 11:56:55 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:04:48.220 11:56:55 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:04:48.220 11:56:55 -- common/autotest_common.sh@1498 -- # bdfs=() 
00:04:48.220 11:56:55 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:48.220 11:56:55 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:48.220 11:56:55 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:48.220 11:56:55 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:48.220 11:56:55 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:48.220 11:56:55 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:48.220 11:56:55 -- common/autotest_common.sh@1521 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh reset 00:04:52.411 0000:85:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:85:05.5 00:04:52.411 0000:ae:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:ae:05.5 00:04:52.411 Waiting for block devices as requested 00:04:52.411 0000:5e:00.0 (8086 0b60): vfio-pci -> nvme 00:04:52.411 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:52.411 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:52.411 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:52.411 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:52.411 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:52.411 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:52.411 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:52.411 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:52.670 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:52.670 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:52.670 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:52.928 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:52.928 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:52.928 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:53.187 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:53.187 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:53.187 11:57:00 -- 
common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:04:53.187 11:57:00 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:53.187 11:57:00 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:53.188 11:57:00 -- common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:04:53.188 11:57:00 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:53.188 11:57:00 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:53.188 11:57:00 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:53.188 11:57:00 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:53.188 11:57:00 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:04:53.188 11:57:00 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:04:53.188 11:57:00 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:04:53.188 11:57:00 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:04:53.188 11:57:00 -- common/autotest_common.sh@1530 -- # grep oacs 00:04:53.188 11:57:00 -- common/autotest_common.sh@1530 -- # oacs=' 0x1e' 00:04:53.188 11:57:00 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:04:53.188 11:57:00 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:04:53.188 11:57:00 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:04:53.188 11:57:00 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:04:53.188 11:57:00 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:04:53.188 11:57:00 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:04:53.188 11:57:00 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:04:53.188 11:57:00 -- common/autotest_common.sh@1542 -- # continue 00:04:53.188 11:57:00 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:04:53.188 11:57:00 -- 
common/autotest_common.sh@718 -- # xtrace_disable 00:04:53.188 11:57:00 -- common/autotest_common.sh@10 -- # set +x 00:04:53.447 11:57:00 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:04:53.447 11:57:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:53.447 11:57:00 -- common/autotest_common.sh@10 -- # set +x 00:04:53.447 11:57:00 -- spdk/autotest.sh@150 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/setup.sh 00:04:57.668 0000:85:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:85:05.5 00:04:57.668 0000:ae:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:ae:05.5 00:04:57.668 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:57.668 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:57.668 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:57.668 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:57.668 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:57.668 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:57.668 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:57.668 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:57.668 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:57.668 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:57.668 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:57.668 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:57.668 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:57.668 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:57.668 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:57.668 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:58.605 0000:5e:00.0 (8086 0b60): nvme -> vfio-pci 00:04:58.605 11:57:05 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:04:58.605 11:57:05 -- common/autotest_common.sh@718 -- # xtrace_disable 00:04:58.605 11:57:05 -- common/autotest_common.sh@10 -- # set +x 00:04:58.605 11:57:05 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:04:58.605 11:57:05 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 
00:04:58.605 11:57:05 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:58.605 11:57:05 -- common/autotest_common.sh@1562 -- # bdfs=() 00:04:58.605 11:57:05 -- common/autotest_common.sh@1562 -- # local bdfs 00:04:58.605 11:57:05 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:58.605 11:57:05 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:58.605 11:57:05 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:58.605 11:57:05 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:58.605 11:57:05 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:58.605 11:57:05 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:58.605 11:57:05 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:58.605 11:57:05 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:58.605 11:57:05 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:04:58.605 11:57:05 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:58.605 11:57:05 -- common/autotest_common.sh@1565 -- # device=0x0b60 00:04:58.605 11:57:05 -- common/autotest_common.sh@1566 -- # [[ 0x0b60 == \0\x\0\a\5\4 ]] 00:04:58.605 11:57:05 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:04:58.605 11:57:05 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:58.605 11:57:05 -- common/autotest_common.sh@1578 -- # return 0 00:04:58.605 11:57:05 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:04:58.605 11:57:05 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:04:58.605 11:57:05 -- spdk/autotest.sh@166 -- # [[ 1 -eq 1 ]] 00:04:58.605 11:57:05 -- spdk/autotest.sh@167 -- # [[ 0 -eq 1 ]] 00:04:58.605 11:57:05 -- spdk/autotest.sh@170 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/qat_setup.sh 00:04:59.172 Restarting all devices. 
00:05:02.463 lstat() error: No such file or directory 00:05:02.463 QAT Error: No GENERAL section found 00:05:02.463 Failed to configure qat_dev0 00:05:02.463 lstat() error: No such file or directory 00:05:02.463 QAT Error: No GENERAL section found 00:05:02.463 Failed to configure qat_dev1 00:05:02.463 lstat() error: No such file or directory 00:05:02.463 QAT Error: No GENERAL section found 00:05:02.463 Failed to configure qat_dev2 00:05:02.463 enable sriov 00:05:02.463 Checking status of all devices. 00:05:02.463 There is 3 QAT acceleration device(s) in the system: 00:05:02.463 qat_dev0 - type: c6xx, inst_id: 0, node_id: 0, bsf: 0000:3d:00.0, #accel: 5 #engines: 10 state: down 00:05:02.463 qat_dev1 - type: c6xx, inst_id: 1, node_id: 0, bsf: 0000:3f:00.0, #accel: 5 #engines: 10 state: down 00:05:02.463 qat_dev2 - type: c6xx, inst_id: 2, node_id: 1, bsf: 0000:da:00.0, #accel: 5 #engines: 10 state: down 00:05:03.029 0000:3d:00.0 set to 16 VFs 00:05:03.597 0000:3f:00.0 set to 16 VFs 00:05:04.166 0000:da:00.0 set to 16 VFs 00:05:04.166 Properly configured the qat device with driver uio_pci_generic. 00:05:04.166 11:57:11 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:04.166 11:57:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:04.166 11:57:11 -- common/autotest_common.sh@10 -- # set +x 00:05:04.166 11:57:11 -- spdk/autotest.sh@175 -- # run_test env /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh 00:05:04.166 11:57:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:04.166 11:57:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:04.166 11:57:11 -- common/autotest_common.sh@10 -- # set +x 00:05:04.166 ************************************ 00:05:04.166 START TEST env 00:05:04.166 ************************************ 00:05:04.166 11:57:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env.sh 00:05:04.424 * Looking for test storage... 
00:05:04.424 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env 00:05:04.424 11:57:11 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut 00:05:04.424 11:57:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:04.424 11:57:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:04.424 11:57:11 -- common/autotest_common.sh@10 -- # set +x 00:05:04.424 ************************************ 00:05:04.424 START TEST env_memory 00:05:04.424 ************************************ 00:05:04.424 11:57:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/memory/memory_ut 00:05:04.424 00:05:04.424 00:05:04.424 CUnit - A unit testing framework for C - Version 2.1-3 00:05:04.424 http://cunit.sourceforge.net/ 00:05:04.424 00:05:04.424 00:05:04.424 Suite: memory 00:05:04.425 Test: alloc and free memory map ...[2024-07-25 11:57:11.590827] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:04.425 passed 00:05:04.425 Test: mem map translation ...[2024-07-25 11:57:11.610634] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:04.425 [2024-07-25 11:57:11.610651] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:04.425 [2024-07-25 11:57:11.610690] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:04.425 [2024-07-25 11:57:11.610698] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not 
get 0xffffffe00000 map 00:05:04.425 passed 00:05:04.425 Test: mem map registration ...[2024-07-25 11:57:11.646942] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:04.425 [2024-07-25 11:57:11.646958] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:04.425 passed 00:05:04.425 Test: mem map adjacent registrations ...passed 00:05:04.425 00:05:04.425 Run Summary: Type Total Ran Passed Failed Inactive 00:05:04.425 suites 1 1 n/a 0 0 00:05:04.425 tests 4 4 4 0 0 00:05:04.425 asserts 152 152 152 0 n/a 00:05:04.425 00:05:04.425 Elapsed time = 0.133 seconds 00:05:04.425 00:05:04.425 real 0m0.147s 00:05:04.425 user 0m0.136s 00:05:04.425 sys 0m0.011s 00:05:04.425 11:57:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.425 11:57:11 -- common/autotest_common.sh@10 -- # set +x 00:05:04.425 ************************************ 00:05:04.425 END TEST env_memory 00:05:04.425 ************************************ 00:05:04.425 11:57:11 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:04.425 11:57:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:04.425 11:57:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:04.425 11:57:11 -- common/autotest_common.sh@10 -- # set +x 00:05:04.682 ************************************ 00:05:04.682 START TEST env_vtophys 00:05:04.682 ************************************ 00:05:04.682 11:57:11 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:04.682 EAL: lib.eal log level changed from notice to debug 00:05:04.682 EAL: Detected lcore 0 as core 0 on socket 0 00:05:04.682 EAL: Detected lcore 1 as core 1 on socket 0 00:05:04.682 EAL: Detected 
lcore 2 as core 2 on socket 0 00:05:04.682 EAL: Detected lcore 3 as core 3 on socket 0 00:05:04.682 EAL: Detected lcore 4 as core 4 on socket 0 00:05:04.682 EAL: Detected lcore 5 as core 8 on socket 0 00:05:04.682 EAL: Detected lcore 6 as core 9 on socket 0 00:05:04.682 EAL: Detected lcore 7 as core 10 on socket 0 00:05:04.682 EAL: Detected lcore 8 as core 11 on socket 0 00:05:04.682 EAL: Detected lcore 9 as core 16 on socket 0 00:05:04.682 EAL: Detected lcore 10 as core 17 on socket 0 00:05:04.682 EAL: Detected lcore 11 as core 18 on socket 0 00:05:04.682 EAL: Detected lcore 12 as core 19 on socket 0 00:05:04.682 EAL: Detected lcore 13 as core 20 on socket 0 00:05:04.682 EAL: Detected lcore 14 as core 24 on socket 0 00:05:04.682 EAL: Detected lcore 15 as core 25 on socket 0 00:05:04.682 EAL: Detected lcore 16 as core 26 on socket 0 00:05:04.682 EAL: Detected lcore 17 as core 27 on socket 0 00:05:04.682 EAL: Detected lcore 18 as core 0 on socket 1 00:05:04.682 EAL: Detected lcore 19 as core 1 on socket 1 00:05:04.682 EAL: Detected lcore 20 as core 2 on socket 1 00:05:04.682 EAL: Detected lcore 21 as core 3 on socket 1 00:05:04.682 EAL: Detected lcore 22 as core 4 on socket 1 00:05:04.682 EAL: Detected lcore 23 as core 8 on socket 1 00:05:04.682 EAL: Detected lcore 24 as core 9 on socket 1 00:05:04.682 EAL: Detected lcore 25 as core 10 on socket 1 00:05:04.682 EAL: Detected lcore 26 as core 11 on socket 1 00:05:04.682 EAL: Detected lcore 27 as core 16 on socket 1 00:05:04.682 EAL: Detected lcore 28 as core 17 on socket 1 00:05:04.682 EAL: Detected lcore 29 as core 18 on socket 1 00:05:04.682 EAL: Detected lcore 30 as core 19 on socket 1 00:05:04.682 EAL: Detected lcore 31 as core 20 on socket 1 00:05:04.682 EAL: Detected lcore 32 as core 24 on socket 1 00:05:04.682 EAL: Detected lcore 33 as core 25 on socket 1 00:05:04.682 EAL: Detected lcore 34 as core 26 on socket 1 00:05:04.682 EAL: Detected lcore 35 as core 27 on socket 1 00:05:04.682 EAL: Detected lcore 36 as 
core 0 on socket 0 00:05:04.682 EAL: Detected lcore 37 as core 1 on socket 0 00:05:04.682 EAL: Detected lcore 38 as core 2 on socket 0 00:05:04.682 EAL: Detected lcore 39 as core 3 on socket 0 00:05:04.682 EAL: Detected lcore 40 as core 4 on socket 0 00:05:04.682 EAL: Detected lcore 41 as core 8 on socket 0 00:05:04.682 EAL: Detected lcore 42 as core 9 on socket 0 00:05:04.682 EAL: Detected lcore 43 as core 10 on socket 0 00:05:04.682 EAL: Detected lcore 44 as core 11 on socket 0 00:05:04.682 EAL: Detected lcore 45 as core 16 on socket 0 00:05:04.682 EAL: Detected lcore 46 as core 17 on socket 0 00:05:04.682 EAL: Detected lcore 47 as core 18 on socket 0 00:05:04.682 EAL: Detected lcore 48 as core 19 on socket 0 00:05:04.682 EAL: Detected lcore 49 as core 20 on socket 0 00:05:04.682 EAL: Detected lcore 50 as core 24 on socket 0 00:05:04.682 EAL: Detected lcore 51 as core 25 on socket 0 00:05:04.682 EAL: Detected lcore 52 as core 26 on socket 0 00:05:04.682 EAL: Detected lcore 53 as core 27 on socket 0 00:05:04.682 EAL: Detected lcore 54 as core 0 on socket 1 00:05:04.682 EAL: Detected lcore 55 as core 1 on socket 1 00:05:04.682 EAL: Detected lcore 56 as core 2 on socket 1 00:05:04.682 EAL: Detected lcore 57 as core 3 on socket 1 00:05:04.682 EAL: Detected lcore 58 as core 4 on socket 1 00:05:04.682 EAL: Detected lcore 59 as core 8 on socket 1 00:05:04.682 EAL: Detected lcore 60 as core 9 on socket 1 00:05:04.682 EAL: Detected lcore 61 as core 10 on socket 1 00:05:04.683 EAL: Detected lcore 62 as core 11 on socket 1 00:05:04.683 EAL: Detected lcore 63 as core 16 on socket 1 00:05:04.683 EAL: Detected lcore 64 as core 17 on socket 1 00:05:04.683 EAL: Detected lcore 65 as core 18 on socket 1 00:05:04.683 EAL: Detected lcore 66 as core 19 on socket 1 00:05:04.683 EAL: Detected lcore 67 as core 20 on socket 1 00:05:04.683 EAL: Detected lcore 68 as core 24 on socket 1 00:05:04.683 EAL: Detected lcore 69 as core 25 on socket 1 00:05:04.683 EAL: Detected lcore 70 as core 26 
on socket 1 00:05:04.683 EAL: Detected lcore 71 as core 27 on socket 1 00:05:04.683 EAL: Maximum logical cores by configuration: 128 00:05:04.683 EAL: Detected CPU lcores: 72 00:05:04.683 EAL: Detected NUMA nodes: 2 00:05:04.683 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:04.683 EAL: Detected shared linkage of DPDK 00:05:04.683 EAL: No shared files mode enabled, IPC will be disabled 00:05:04.683 EAL: No shared files mode enabled, IPC is disabled 00:05:04.683 EAL: PCI driver qat for device 0000:3d:01.0 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:3d:01.1 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:3d:01.2 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:3d:01.3 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:3d:01.4 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:3d:01.5 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:3d:01.6 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:3d:01.7 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:3d:02.0 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:3d:02.1 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:3d:02.2 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:3d:02.3 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:3d:02.4 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:3d:02.5 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:3d:02.6 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:3d:02.7 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:3f:01.0 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:3f:01.1 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:3f:01.2 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:3f:01.3 wants IOVA as 'PA' 00:05:04.683 
EAL: PCI driver qat for device 0000:3f:01.4 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:3f:01.5 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:3f:01.6 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:3f:01.7 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:3f:02.0 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:3f:02.1 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:3f:02.2 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:3f:02.3 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:3f:02.4 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:3f:02.5 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:3f:02.6 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:3f:02.7 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:da:01.0 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:da:01.1 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:da:01.2 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:da:01.3 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:da:01.4 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:da:01.5 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:da:01.6 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:da:01.7 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:da:02.0 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:da:02.1 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:da:02.2 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:da:02.3 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:da:02.4 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:da:02.5 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for 
device 0000:da:02.6 wants IOVA as 'PA' 00:05:04.683 EAL: PCI driver qat for device 0000:da:02.7 wants IOVA as 'PA' 00:05:04.683 EAL: Bus pci wants IOVA as 'PA' 00:05:04.683 EAL: Bus auxiliary wants IOVA as 'DC' 00:05:04.683 EAL: Bus vdev wants IOVA as 'DC' 00:05:04.683 EAL: Selected IOVA mode 'PA' 00:05:04.683 EAL: Probing VFIO support... 00:05:04.683 EAL: IOMMU type 1 (Type 1) is supported 00:05:04.683 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:04.683 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:04.683 EAL: VFIO support initialized 00:05:04.683 EAL: Ask a virtual area of 0x2e000 bytes 00:05:04.683 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:04.683 EAL: Setting up physically contiguous memory... 00:05:04.683 EAL: Setting maximum number of open files to 524288 00:05:04.683 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:04.683 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:04.683 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:04.683 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.683 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:04.683 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:04.683 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.683 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:04.683 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:04.683 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.683 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:04.683 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:04.683 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.683 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:04.683 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:04.683 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.683 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:04.683 
EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:04.683 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.683 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:04.683 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:04.683 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.683 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:04.683 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:04.683 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.683 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:04.683 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:04.683 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:04.683 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.683 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:04.683 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:04.683 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.683 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:04.683 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:04.683 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.683 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:04.683 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:04.683 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.683 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:04.683 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:04.683 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.683 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:04.683 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:04.683 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.683 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:04.683 EAL: VA reserved for memseg list at 0x201800e00000, size 
400000000 00:05:04.683 EAL: Ask a virtual area of 0x61000 bytes 00:05:04.683 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:04.683 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:04.683 EAL: Ask a virtual area of 0x400000000 bytes 00:05:04.683 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:04.683 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:04.683 EAL: Hugepages will be freed exactly as allocated. 00:05:04.683 EAL: No shared files mode enabled, IPC is disabled 00:05:04.683 EAL: No shared files mode enabled, IPC is disabled 00:05:04.683 EAL: TSC frequency is ~2300000 KHz 00:05:04.683 EAL: Main lcore 0 is ready (tid=7f48dc735b00;cpuset=[0]) 00:05:04.683 EAL: Trying to obtain current memory policy. 00:05:04.683 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.683 EAL: Restoring previous memory policy: 0 00:05:04.683 EAL: request: mp_malloc_sync 00:05:04.683 EAL: No shared files mode enabled, IPC is disabled 00:05:04.683 EAL: Heap on socket 0 was expanded by 2MB 00:05:04.683 EAL: PCI device 0000:3d:01.0 on NUMA socket 0 00:05:04.683 EAL: probe driver: 8086:37c9 qat 00:05:04.683 EAL: PCI memory mapped at 0x202001000000 00:05:04.683 EAL: PCI memory mapped at 0x202001001000 00:05:04.683 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:05:04.683 EAL: PCI device 0000:3d:01.1 on NUMA socket 0 00:05:04.683 EAL: probe driver: 8086:37c9 qat 00:05:04.683 EAL: PCI memory mapped at 0x202001002000 00:05:04.683 EAL: PCI memory mapped at 0x202001003000 00:05:04.683 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:05:04.683 EAL: PCI device 0000:3d:01.2 on NUMA socket 0 00:05:04.683 EAL: probe driver: 8086:37c9 qat 00:05:04.683 EAL: PCI memory mapped at 0x202001004000 00:05:04.683 EAL: PCI memory mapped at 0x202001005000 00:05:04.683 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:05:04.683 EAL: PCI device 0000:3d:01.3 
on NUMA socket 0 00:05:04.683 EAL: probe driver: 8086:37c9 qat 00:05:04.683 EAL: PCI memory mapped at 0x202001006000 00:05:04.683 EAL: PCI memory mapped at 0x202001007000 00:05:04.683 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:05:04.683 EAL: PCI device 0000:3d:01.4 on NUMA socket 0 00:05:04.683 EAL: probe driver: 8086:37c9 qat 00:05:04.683 EAL: PCI memory mapped at 0x202001008000 00:05:04.683 EAL: PCI memory mapped at 0x202001009000 00:05:04.683 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:05:04.683 EAL: PCI device 0000:3d:01.5 on NUMA socket 0 00:05:04.683 EAL: probe driver: 8086:37c9 qat 00:05:04.683 EAL: PCI memory mapped at 0x20200100a000 00:05:04.683 EAL: PCI memory mapped at 0x20200100b000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:05:04.684 EAL: PCI device 0000:3d:01.6 on NUMA socket 0 00:05:04.684 EAL: probe driver: 8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x20200100c000 00:05:04.684 EAL: PCI memory mapped at 0x20200100d000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:05:04.684 EAL: PCI device 0000:3d:01.7 on NUMA socket 0 00:05:04.684 EAL: probe driver: 8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x20200100e000 00:05:04.684 EAL: PCI memory mapped at 0x20200100f000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:05:04.684 EAL: PCI device 0000:3d:02.0 on NUMA socket 0 00:05:04.684 EAL: probe driver: 8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x202001010000 00:05:04.684 EAL: PCI memory mapped at 0x202001011000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:05:04.684 EAL: PCI device 0000:3d:02.1 on NUMA socket 0 00:05:04.684 EAL: probe driver: 8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x202001012000 00:05:04.684 EAL: PCI memory mapped at 0x202001013000 00:05:04.684 EAL: Probe PCI driver: qat 
(8086:37c9) device: 0000:3d:02.1 (socket 0) 00:05:04.684 EAL: PCI device 0000:3d:02.2 on NUMA socket 0 00:05:04.684 EAL: probe driver: 8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x202001014000 00:05:04.684 EAL: PCI memory mapped at 0x202001015000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:05:04.684 EAL: PCI device 0000:3d:02.3 on NUMA socket 0 00:05:04.684 EAL: probe driver: 8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x202001016000 00:05:04.684 EAL: PCI memory mapped at 0x202001017000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:05:04.684 EAL: PCI device 0000:3d:02.4 on NUMA socket 0 00:05:04.684 EAL: probe driver: 8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x202001018000 00:05:04.684 EAL: PCI memory mapped at 0x202001019000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:05:04.684 EAL: PCI device 0000:3d:02.5 on NUMA socket 0 00:05:04.684 EAL: probe driver: 8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x20200101a000 00:05:04.684 EAL: PCI memory mapped at 0x20200101b000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:05:04.684 EAL: PCI device 0000:3d:02.6 on NUMA socket 0 00:05:04.684 EAL: probe driver: 8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x20200101c000 00:05:04.684 EAL: PCI memory mapped at 0x20200101d000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:05:04.684 EAL: PCI device 0000:3d:02.7 on NUMA socket 0 00:05:04.684 EAL: probe driver: 8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x20200101e000 00:05:04.684 EAL: PCI memory mapped at 0x20200101f000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:05:04.684 EAL: PCI device 0000:3f:01.0 on NUMA socket 0 00:05:04.684 EAL: probe driver: 8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x202001020000 00:05:04.684 
EAL: PCI memory mapped at 0x202001021000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:05:04.684 EAL: PCI device 0000:3f:01.1 on NUMA socket 0 00:05:04.684 EAL: probe driver: 8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x202001022000 00:05:04.684 EAL: PCI memory mapped at 0x202001023000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:05:04.684 EAL: PCI device 0000:3f:01.2 on NUMA socket 0 00:05:04.684 EAL: probe driver: 8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x202001024000 00:05:04.684 EAL: PCI memory mapped at 0x202001025000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:05:04.684 EAL: PCI device 0000:3f:01.3 on NUMA socket 0 00:05:04.684 EAL: probe driver: 8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x202001026000 00:05:04.684 EAL: PCI memory mapped at 0x202001027000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:05:04.684 EAL: PCI device 0000:3f:01.4 on NUMA socket 0 00:05:04.684 EAL: probe driver: 8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x202001028000 00:05:04.684 EAL: PCI memory mapped at 0x202001029000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:05:04.684 EAL: PCI device 0000:3f:01.5 on NUMA socket 0 00:05:04.684 EAL: probe driver: 8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x20200102a000 00:05:04.684 EAL: PCI memory mapped at 0x20200102b000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:05:04.684 EAL: PCI device 0000:3f:01.6 on NUMA socket 0 00:05:04.684 EAL: probe driver: 8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x20200102c000 00:05:04.684 EAL: PCI memory mapped at 0x20200102d000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:05:04.684 EAL: PCI device 0000:3f:01.7 on NUMA socket 0 00:05:04.684 EAL: probe driver: 
8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x20200102e000 00:05:04.684 EAL: PCI memory mapped at 0x20200102f000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:05:04.684 EAL: PCI device 0000:3f:02.0 on NUMA socket 0 00:05:04.684 EAL: probe driver: 8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x202001030000 00:05:04.684 EAL: PCI memory mapped at 0x202001031000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:05:04.684 EAL: PCI device 0000:3f:02.1 on NUMA socket 0 00:05:04.684 EAL: probe driver: 8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x202001032000 00:05:04.684 EAL: PCI memory mapped at 0x202001033000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:05:04.684 EAL: PCI device 0000:3f:02.2 on NUMA socket 0 00:05:04.684 EAL: probe driver: 8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x202001034000 00:05:04.684 EAL: PCI memory mapped at 0x202001035000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:05:04.684 EAL: PCI device 0000:3f:02.3 on NUMA socket 0 00:05:04.684 EAL: probe driver: 8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x202001036000 00:05:04.684 EAL: PCI memory mapped at 0x202001037000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:05:04.684 EAL: PCI device 0000:3f:02.4 on NUMA socket 0 00:05:04.684 EAL: probe driver: 8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x202001038000 00:05:04.684 EAL: PCI memory mapped at 0x202001039000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:05:04.684 EAL: PCI device 0000:3f:02.5 on NUMA socket 0 00:05:04.684 EAL: probe driver: 8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x20200103a000 00:05:04.684 EAL: PCI memory mapped at 0x20200103b000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 
00:05:04.684 EAL: PCI device 0000:3f:02.6 on NUMA socket 0 00:05:04.684 EAL: probe driver: 8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x20200103c000 00:05:04.684 EAL: PCI memory mapped at 0x20200103d000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:05:04.684 EAL: PCI device 0000:3f:02.7 on NUMA socket 0 00:05:04.684 EAL: probe driver: 8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x20200103e000 00:05:04.684 EAL: PCI memory mapped at 0x20200103f000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:05:04.684 EAL: PCI device 0000:da:01.0 on NUMA socket 1 00:05:04.684 EAL: probe driver: 8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x202001040000 00:05:04.684 EAL: PCI memory mapped at 0x202001041000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.0 (socket 1) 00:05:04.684 EAL: Trying to obtain current memory policy. 00:05:04.684 EAL: Setting policy MPOL_PREFERRED for socket 1 00:05:04.684 EAL: Restoring previous memory policy: 4 00:05:04.684 EAL: request: mp_malloc_sync 00:05:04.684 EAL: No shared files mode enabled, IPC is disabled 00:05:04.684 EAL: Heap on socket 1 was expanded by 2MB 00:05:04.684 EAL: PCI device 0000:da:01.1 on NUMA socket 1 00:05:04.684 EAL: probe driver: 8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x202001042000 00:05:04.684 EAL: PCI memory mapped at 0x202001043000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.1 (socket 1) 00:05:04.684 EAL: PCI device 0000:da:01.2 on NUMA socket 1 00:05:04.684 EAL: probe driver: 8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x202001044000 00:05:04.684 EAL: PCI memory mapped at 0x202001045000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.2 (socket 1) 00:05:04.684 EAL: PCI device 0000:da:01.3 on NUMA socket 1 00:05:04.684 EAL: probe driver: 8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x202001046000 00:05:04.684 EAL: PCI 
memory mapped at 0x202001047000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.3 (socket 1) 00:05:04.684 EAL: PCI device 0000:da:01.4 on NUMA socket 1 00:05:04.684 EAL: probe driver: 8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x202001048000 00:05:04.684 EAL: PCI memory mapped at 0x202001049000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.4 (socket 1) 00:05:04.684 EAL: PCI device 0000:da:01.5 on NUMA socket 1 00:05:04.684 EAL: probe driver: 8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x20200104a000 00:05:04.684 EAL: PCI memory mapped at 0x20200104b000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.5 (socket 1) 00:05:04.684 EAL: PCI device 0000:da:01.6 on NUMA socket 1 00:05:04.684 EAL: probe driver: 8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x20200104c000 00:05:04.684 EAL: PCI memory mapped at 0x20200104d000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.6 (socket 1) 00:05:04.684 EAL: PCI device 0000:da:01.7 on NUMA socket 1 00:05:04.684 EAL: probe driver: 8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x20200104e000 00:05:04.684 EAL: PCI memory mapped at 0x20200104f000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.7 (socket 1) 00:05:04.684 EAL: PCI device 0000:da:02.0 on NUMA socket 1 00:05:04.684 EAL: probe driver: 8086:37c9 qat 00:05:04.684 EAL: PCI memory mapped at 0x202001050000 00:05:04.684 EAL: PCI memory mapped at 0x202001051000 00:05:04.684 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.0 (socket 1) 00:05:04.685 EAL: PCI device 0000:da:02.1 on NUMA socket 1 00:05:04.685 EAL: probe driver: 8086:37c9 qat 00:05:04.685 EAL: PCI memory mapped at 0x202001052000 00:05:04.685 EAL: PCI memory mapped at 0x202001053000 00:05:04.685 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.1 (socket 1) 00:05:04.685 EAL: PCI device 0000:da:02.2 on NUMA socket 1 00:05:04.685 EAL: probe driver: 8086:37c9 
qat 00:05:04.685 EAL: PCI memory mapped at 0x202001054000 00:05:04.685 EAL: PCI memory mapped at 0x202001055000 00:05:04.685 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.2 (socket 1) 00:05:04.685 EAL: PCI device 0000:da:02.3 on NUMA socket 1 00:05:04.685 EAL: probe driver: 8086:37c9 qat 00:05:04.685 EAL: PCI memory mapped at 0x202001056000 00:05:04.685 EAL: PCI memory mapped at 0x202001057000 00:05:04.685 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.3 (socket 1) 00:05:04.685 EAL: PCI device 0000:da:02.4 on NUMA socket 1 00:05:04.685 EAL: probe driver: 8086:37c9 qat 00:05:04.685 EAL: PCI memory mapped at 0x202001058000 00:05:04.685 EAL: PCI memory mapped at 0x202001059000 00:05:04.685 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.4 (socket 1) 00:05:04.685 EAL: PCI device 0000:da:02.5 on NUMA socket 1 00:05:04.685 EAL: probe driver: 8086:37c9 qat 00:05:04.685 EAL: PCI memory mapped at 0x20200105a000 00:05:04.685 EAL: PCI memory mapped at 0x20200105b000 00:05:04.685 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.5 (socket 1) 00:05:04.685 EAL: PCI device 0000:da:02.6 on NUMA socket 1 00:05:04.685 EAL: probe driver: 8086:37c9 qat 00:05:04.685 EAL: PCI memory mapped at 0x20200105c000 00:05:04.685 EAL: PCI memory mapped at 0x20200105d000 00:05:04.685 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.6 (socket 1) 00:05:04.685 EAL: PCI device 0000:da:02.7 on NUMA socket 1 00:05:04.685 EAL: probe driver: 8086:37c9 qat 00:05:04.685 EAL: PCI memory mapped at 0x20200105e000 00:05:04.685 EAL: PCI memory mapped at 0x20200105f000 00:05:04.685 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.7 (socket 1) 00:05:04.685 EAL: No shared files mode enabled, IPC is disabled 00:05:04.685 EAL: No shared files mode enabled, IPC is disabled 00:05:04.685 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:04.685 EAL: Mem event callback 'spdk:(nil)' registered 00:05:04.685 00:05:04.685 00:05:04.685 CUnit - A unit 
testing framework for C - Version 2.1-3 00:05:04.685 http://cunit.sourceforge.net/ 00:05:04.685 00:05:04.685 00:05:04.685 Suite: components_suite 00:05:04.685 Test: vtophys_malloc_test ...passed 00:05:04.685 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:04.685 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.685 EAL: Restoring previous memory policy: 4 00:05:04.685 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.685 EAL: request: mp_malloc_sync 00:05:04.685 EAL: No shared files mode enabled, IPC is disabled 00:05:04.685 EAL: Heap on socket 0 was expanded by 4MB 00:05:04.685 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.685 EAL: request: mp_malloc_sync 00:05:04.685 EAL: No shared files mode enabled, IPC is disabled 00:05:04.685 EAL: Heap on socket 0 was shrunk by 4MB 00:05:04.685 EAL: Trying to obtain current memory policy. 00:05:04.685 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.685 EAL: Restoring previous memory policy: 4 00:05:04.685 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.685 EAL: request: mp_malloc_sync 00:05:04.685 EAL: No shared files mode enabled, IPC is disabled 00:05:04.685 EAL: Heap on socket 0 was expanded by 6MB 00:05:04.685 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.685 EAL: request: mp_malloc_sync 00:05:04.685 EAL: No shared files mode enabled, IPC is disabled 00:05:04.685 EAL: Heap on socket 0 was shrunk by 6MB 00:05:04.685 EAL: Trying to obtain current memory policy. 
00:05:04.685 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.685 EAL: Restoring previous memory policy: 4 00:05:04.685 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.685 EAL: request: mp_malloc_sync 00:05:04.685 EAL: No shared files mode enabled, IPC is disabled 00:05:04.685 EAL: Heap on socket 0 was expanded by 10MB 00:05:04.685 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.685 EAL: request: mp_malloc_sync 00:05:04.685 EAL: No shared files mode enabled, IPC is disabled 00:05:04.685 EAL: Heap on socket 0 was shrunk by 10MB 00:05:04.685 EAL: Trying to obtain current memory policy. 00:05:04.685 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.685 EAL: Restoring previous memory policy: 4 00:05:04.685 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.685 EAL: request: mp_malloc_sync 00:05:04.685 EAL: No shared files mode enabled, IPC is disabled 00:05:04.685 EAL: Heap on socket 0 was expanded by 18MB 00:05:04.685 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.685 EAL: request: mp_malloc_sync 00:05:04.685 EAL: No shared files mode enabled, IPC is disabled 00:05:04.685 EAL: Heap on socket 0 was shrunk by 18MB 00:05:04.685 EAL: Trying to obtain current memory policy. 00:05:04.685 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.685 EAL: Restoring previous memory policy: 4 00:05:04.685 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.685 EAL: request: mp_malloc_sync 00:05:04.685 EAL: No shared files mode enabled, IPC is disabled 00:05:04.685 EAL: Heap on socket 0 was expanded by 34MB 00:05:04.685 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.685 EAL: request: mp_malloc_sync 00:05:04.685 EAL: No shared files mode enabled, IPC is disabled 00:05:04.685 EAL: Heap on socket 0 was shrunk by 34MB 00:05:04.685 EAL: Trying to obtain current memory policy. 
00:05:04.685 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.685 EAL: Restoring previous memory policy: 4 00:05:04.685 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.685 EAL: request: mp_malloc_sync 00:05:04.685 EAL: No shared files mode enabled, IPC is disabled 00:05:04.685 EAL: Heap on socket 0 was expanded by 66MB 00:05:04.685 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.685 EAL: request: mp_malloc_sync 00:05:04.685 EAL: No shared files mode enabled, IPC is disabled 00:05:04.685 EAL: Heap on socket 0 was shrunk by 66MB 00:05:04.685 EAL: Trying to obtain current memory policy. 00:05:04.685 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.685 EAL: Restoring previous memory policy: 4 00:05:04.685 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.685 EAL: request: mp_malloc_sync 00:05:04.685 EAL: No shared files mode enabled, IPC is disabled 00:05:04.685 EAL: Heap on socket 0 was expanded by 130MB 00:05:04.685 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.685 EAL: request: mp_malloc_sync 00:05:04.685 EAL: No shared files mode enabled, IPC is disabled 00:05:04.685 EAL: Heap on socket 0 was shrunk by 130MB 00:05:04.685 EAL: Trying to obtain current memory policy. 00:05:04.685 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.942 EAL: Restoring previous memory policy: 4 00:05:04.942 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.942 EAL: request: mp_malloc_sync 00:05:04.942 EAL: No shared files mode enabled, IPC is disabled 00:05:04.942 EAL: Heap on socket 0 was expanded by 258MB 00:05:04.942 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.942 EAL: request: mp_malloc_sync 00:05:04.942 EAL: No shared files mode enabled, IPC is disabled 00:05:04.942 EAL: Heap on socket 0 was shrunk by 258MB 00:05:04.942 EAL: Trying to obtain current memory policy. 
00:05:04.942 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:04.942 EAL: Restoring previous memory policy: 4 00:05:04.942 EAL: Calling mem event callback 'spdk:(nil)' 00:05:04.942 EAL: request: mp_malloc_sync 00:05:04.942 EAL: No shared files mode enabled, IPC is disabled 00:05:04.942 EAL: Heap on socket 0 was expanded by 514MB 00:05:05.200 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.200 EAL: request: mp_malloc_sync 00:05:05.200 EAL: No shared files mode enabled, IPC is disabled 00:05:05.200 EAL: Heap on socket 0 was shrunk by 514MB 00:05:05.200 EAL: Trying to obtain current memory policy. 00:05:05.200 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:05.458 EAL: Restoring previous memory policy: 4 00:05:05.458 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.458 EAL: request: mp_malloc_sync 00:05:05.458 EAL: No shared files mode enabled, IPC is disabled 00:05:05.458 EAL: Heap on socket 0 was expanded by 1026MB 00:05:05.716 EAL: Calling mem event callback 'spdk:(nil)' 00:05:05.716 EAL: request: mp_malloc_sync 00:05:05.716 EAL: No shared files mode enabled, IPC is disabled 00:05:05.716 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:05.716 passed 00:05:05.716 00:05:05.716 Run Summary: Type Total Ran Passed Failed Inactive 00:05:05.716 suites 1 1 n/a 0 0 00:05:05.716 tests 2 2 2 0 0 00:05:05.716 asserts 5806 5806 5806 0 n/a 00:05:05.716 00:05:05.716 Elapsed time = 1.122 seconds 00:05:05.716 EAL: No shared files mode enabled, IPC is disabled 00:05:05.717 EAL: No shared files mode enabled, IPC is disabled 00:05:05.717 EAL: No shared files mode enabled, IPC is disabled 00:05:05.717 00:05:05.717 real 0m1.273s 00:05:05.717 user 0m0.722s 00:05:05.717 sys 0m0.522s 00:05:05.717 11:57:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.717 11:57:13 -- common/autotest_common.sh@10 -- # set +x 00:05:05.717 ************************************ 00:05:05.717 END TEST env_vtophys 00:05:05.717 ************************************ 00:05:05.976 
11:57:13 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut 00:05:05.976 11:57:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:05.976 11:57:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:05.976 11:57:13 -- common/autotest_common.sh@10 -- # set +x 00:05:05.976 ************************************ 00:05:05.976 START TEST env_pci 00:05:05.976 ************************************ 00:05:05.976 11:57:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/pci/pci_ut 00:05:05.976 00:05:05.976 00:05:05.976 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.976 http://cunit.sourceforge.net/ 00:05:05.976 00:05:05.976 00:05:05.976 Suite: pci 00:05:05.976 Test: pci_hook ...[2024-07-25 11:57:13.073887] /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1176568 has claimed it 00:05:05.976 EAL: Cannot find device (10000:00:01.0) 00:05:05.976 EAL: Failed to attach device on primary process 00:05:05.976 passed 00:05:05.976 00:05:05.976 Run Summary: Type Total Ran Passed Failed Inactive 00:05:05.976 suites 1 1 n/a 0 0 00:05:05.976 tests 1 1 1 0 0 00:05:05.976 asserts 25 25 25 0 n/a 00:05:05.976 00:05:05.976 Elapsed time = 0.035 seconds 00:05:05.976 00:05:05.976 real 0m0.062s 00:05:05.976 user 0m0.025s 00:05:05.976 sys 0m0.037s 00:05:05.976 11:57:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:05.976 11:57:13 -- common/autotest_common.sh@10 -- # set +x 00:05:05.976 ************************************ 00:05:05.976 END TEST env_pci 00:05:05.976 ************************************ 00:05:05.976 11:57:13 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:05.976 11:57:13 -- env/env.sh@15 -- # uname 00:05:05.976 11:57:13 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:05.976 11:57:13 -- env/env.sh@22 -- 
# argv+=--base-virtaddr=0x200000000000 00:05:05.976 11:57:13 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:05.976 11:57:13 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:05.976 11:57:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:05.976 11:57:13 -- common/autotest_common.sh@10 -- # set +x 00:05:05.976 ************************************ 00:05:05.976 START TEST env_dpdk_post_init 00:05:05.976 ************************************ 00:05:05.976 11:57:13 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:05.976 EAL: Detected CPU lcores: 72 00:05:05.976 EAL: Detected NUMA nodes: 2 00:05:05.976 EAL: Detected shared linkage of DPDK 00:05:05.976 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:05.976 EAL: Selected IOVA mode 'PA' 00:05:05.976 EAL: VFIO support initialized 00:05:05.976 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0) 00:05:05.976 CRYPTODEV: Creating cryptodev 0000:3d:01.0_qat_sym 00:05:05.976 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:05.976 CRYPTODEV: Creating cryptodev 0000:3d:01.0_qat_asym 00:05:05.976 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:05.976 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0) 00:05:05.976 CRYPTODEV: Creating cryptodev 0000:3d:01.1_qat_sym 00:05:05.976 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:05.976 CRYPTODEV: Creating cryptodev 0000:3d:01.1_qat_asym 00:05:05.976 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:05.976 EAL: Probe PCI driver: 
qat (8086:37c9) device: 0000:3d:01.2 (socket 0) 00:05:05.976 CRYPTODEV: Creating cryptodev 0000:3d:01.2_qat_sym 00:05:05.976 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:05.976 CRYPTODEV: Creating cryptodev 0000:3d:01.2_qat_asym 00:05:05.976 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:05.976 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0) 00:05:05.976 CRYPTODEV: Creating cryptodev 0000:3d:01.3_qat_sym 00:05:05.976 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:05.976 CRYPTODEV: Creating cryptodev 0000:3d:01.3_qat_asym 00:05:05.976 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:05.976 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:05:05.976 CRYPTODEV: Creating cryptodev 0000:3d:01.4_qat_sym 00:05:05.976 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:05.976 CRYPTODEV: Creating cryptodev 0000:3d:01.4_qat_asym 00:05:05.976 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:05.976 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:05:05.976 CRYPTODEV: Creating cryptodev 0000:3d:01.5_qat_sym 00:05:05.976 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:05.976 CRYPTODEV: Creating cryptodev 0000:3d:01.5_qat_asym 00:05:05.976 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:05.976 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:05:05.976 CRYPTODEV: Creating cryptodev 0000:3d:01.6_qat_sym 00:05:05.976 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:05.976 CRYPTODEV: 
Creating cryptodev 0000:3d:01.6_qat_asym 00:05:05.976 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:05.976 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:05:05.976 CRYPTODEV: Creating cryptodev 0000:3d:01.7_qat_sym 00:05:05.976 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:05.976 CRYPTODEV: Creating cryptodev 0000:3d:01.7_qat_asym 00:05:05.976 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:05.976 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.0 (socket 0) 00:05:05.976 CRYPTODEV: Creating cryptodev 0000:3d:02.0_qat_sym 00:05:05.976 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:05.976 CRYPTODEV: Creating cryptodev 0000:3d:02.0_qat_asym 00:05:05.976 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:05.976 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:05:05.976 CRYPTODEV: Creating cryptodev 0000:3d:02.1_qat_sym 00:05:05.976 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:05.976 CRYPTODEV: Creating cryptodev 0000:3d:02.1_qat_asym 00:05:05.976 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:05.976 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:05:05.976 CRYPTODEV: Creating cryptodev 0000:3d:02.2_qat_sym 00:05:05.976 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:05.976 CRYPTODEV: Creating cryptodev 0000:3d:02.2_qat_asym 00:05:05.976 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:05.976 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 
00:05:05.976 CRYPTODEV: Creating cryptodev 0000:3d:02.3_qat_sym 00:05:05.976 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:05.976 CRYPTODEV: Creating cryptodev 0000:3d:02.3_qat_asym 00:05:05.976 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:05.976 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:05:05.976 CRYPTODEV: Creating cryptodev 0000:3d:02.4_qat_sym 00:05:05.976 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:05.976 CRYPTODEV: Creating cryptodev 0000:3d:02.4_qat_asym 00:05:05.976 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:05.976 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:05:05.976 CRYPTODEV: Creating cryptodev 0000:3d:02.5_qat_sym 00:05:05.976 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:05.976 CRYPTODEV: Creating cryptodev 0000:3d:02.5_qat_asym 00:05:05.976 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:05.976 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3d:02.6_qat_sym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3d:02.6_qat_asym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:05.977 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3d:02.7_qat_sym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3d:02.7_qat_asym 
00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:05.977 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3f:01.0_qat_sym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3f:01.0_qat_asym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:05.977 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3f:01.1_qat_sym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3f:01.1_qat_asym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:05.977 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3f:01.2_qat_sym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3f:01.2_qat_asym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:05.977 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3f:01.3_qat_sym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3f:01.3_qat_asym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:05.977 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:05:05.977 CRYPTODEV: Creating cryptodev 
0000:3f:01.4_qat_sym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3f:01.4_qat_asym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:05.977 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3f:01.5_qat_sym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3f:01.5_qat_asym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:05.977 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3f:01.6_qat_sym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3f:01.6_qat_asym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:05.977 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3f:01.7_qat_sym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3f:01.7_qat_asym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:05.977 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3f:02.0_qat_sym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3f:02.0_qat_asym 00:05:05.977 CRYPTODEV: Initialisation parameters 
- name: 0000:3f:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:05.977 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3f:02.1_qat_sym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3f:02.1_qat_asym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:05.977 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3f:02.2_qat_sym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3f:02.2_qat_asym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:05.977 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3f:02.3_qat_sym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3f:02.3_qat_asym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:05.977 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3f:02.4_qat_sym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3f:02.4_qat_asym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:05.977 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3f:02.5_qat_sym 00:05:05.977 CRYPTODEV: 
Initialisation parameters - name: 0000:3f:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3f:02.5_qat_asym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:05.977 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3f:02.6_qat_sym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3f:02.6_qat_asym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:05.977 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3f:02.7_qat_sym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:3f:02.7_qat_asym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:05.977 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.0 (socket 1) 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:da:01.0_qat_sym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:da:01.0_qat_sym,socket id: 1, max queue pairs: 0 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:da:01.0_qat_asym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:da:01.0_qat_asym,socket id: 1, max queue pairs: 0 00:05:05.977 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.1 (socket 1) 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:da:01.1_qat_sym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:da:01.1_qat_sym,socket id: 1, max queue pairs: 0 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:da:01.1_qat_asym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:da:01.1_qat_asym,socket id: 1, 
max queue pairs: 0 00:05:05.977 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.2 (socket 1) 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:da:01.2_qat_sym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:da:01.2_qat_sym,socket id: 1, max queue pairs: 0 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:da:01.2_qat_asym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:da:01.2_qat_asym,socket id: 1, max queue pairs: 0 00:05:05.977 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.3 (socket 1) 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:da:01.3_qat_sym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:da:01.3_qat_sym,socket id: 1, max queue pairs: 0 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:da:01.3_qat_asym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:da:01.3_qat_asym,socket id: 1, max queue pairs: 0 00:05:05.977 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.4 (socket 1) 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:da:01.4_qat_sym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:da:01.4_qat_sym,socket id: 1, max queue pairs: 0 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:da:01.4_qat_asym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:da:01.4_qat_asym,socket id: 1, max queue pairs: 0 00:05:05.977 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.5 (socket 1) 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:da:01.5_qat_sym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:da:01.5_qat_sym,socket id: 1, max queue pairs: 0 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:da:01.5_qat_asym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:da:01.5_qat_asym,socket id: 1, max queue pairs: 0 00:05:05.977 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.6 (socket 1) 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:da:01.6_qat_sym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 
0000:da:01.6_qat_sym,socket id: 1, max queue pairs: 0 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:da:01.6_qat_asym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:da:01.6_qat_asym,socket id: 1, max queue pairs: 0 00:05:05.977 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.7 (socket 1) 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:da:01.7_qat_sym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:da:01.7_qat_sym,socket id: 1, max queue pairs: 0 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:da:01.7_qat_asym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:da:01.7_qat_asym,socket id: 1, max queue pairs: 0 00:05:05.977 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.0 (socket 1) 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:da:02.0_qat_sym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:da:02.0_qat_sym,socket id: 1, max queue pairs: 0 00:05:05.977 CRYPTODEV: Creating cryptodev 0000:da:02.0_qat_asym 00:05:05.977 CRYPTODEV: Initialisation parameters - name: 0000:da:02.0_qat_asym,socket id: 1, max queue pairs: 0 00:05:05.977 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.1 (socket 1) 00:05:05.978 CRYPTODEV: Creating cryptodev 0000:da:02.1_qat_sym 00:05:05.978 CRYPTODEV: Initialisation parameters - name: 0000:da:02.1_qat_sym,socket id: 1, max queue pairs: 0 00:05:05.978 CRYPTODEV: Creating cryptodev 0000:da:02.1_qat_asym 00:05:05.978 CRYPTODEV: Initialisation parameters - name: 0000:da:02.1_qat_asym,socket id: 1, max queue pairs: 0 00:05:05.978 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.2 (socket 1) 00:05:05.978 CRYPTODEV: Creating cryptodev 0000:da:02.2_qat_sym 00:05:05.978 CRYPTODEV: Initialisation parameters - name: 0000:da:02.2_qat_sym,socket id: 1, max queue pairs: 0 00:05:05.978 CRYPTODEV: Creating cryptodev 0000:da:02.2_qat_asym 00:05:05.978 CRYPTODEV: Initialisation parameters - name: 0000:da:02.2_qat_asym,socket id: 1, max queue pairs: 0 00:05:05.978 
EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.3 (socket 1) 00:05:05.978 CRYPTODEV: Creating cryptodev 0000:da:02.3_qat_sym 00:05:05.978 CRYPTODEV: Initialisation parameters - name: 0000:da:02.3_qat_sym,socket id: 1, max queue pairs: 0 00:05:05.978 CRYPTODEV: Creating cryptodev 0000:da:02.3_qat_asym 00:05:05.978 CRYPTODEV: Initialisation parameters - name: 0000:da:02.3_qat_asym,socket id: 1, max queue pairs: 0 00:05:05.978 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.4 (socket 1) 00:05:05.978 CRYPTODEV: Creating cryptodev 0000:da:02.4_qat_sym 00:05:05.978 CRYPTODEV: Initialisation parameters - name: 0000:da:02.4_qat_sym,socket id: 1, max queue pairs: 0 00:05:05.978 CRYPTODEV: Creating cryptodev 0000:da:02.4_qat_asym 00:05:05.978 CRYPTODEV: Initialisation parameters - name: 0000:da:02.4_qat_asym,socket id: 1, max queue pairs: 0 00:05:05.978 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.5 (socket 1) 00:05:05.978 CRYPTODEV: Creating cryptodev 0000:da:02.5_qat_sym 00:05:05.978 CRYPTODEV: Initialisation parameters - name: 0000:da:02.5_qat_sym,socket id: 1, max queue pairs: 0 00:05:05.978 CRYPTODEV: Creating cryptodev 0000:da:02.5_qat_asym 00:05:05.978 CRYPTODEV: Initialisation parameters - name: 0000:da:02.5_qat_asym,socket id: 1, max queue pairs: 0 00:05:05.978 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.6 (socket 1) 00:05:05.978 CRYPTODEV: Creating cryptodev 0000:da:02.6_qat_sym 00:05:05.978 CRYPTODEV: Initialisation parameters - name: 0000:da:02.6_qat_sym,socket id: 1, max queue pairs: 0 00:05:05.978 CRYPTODEV: Creating cryptodev 0000:da:02.6_qat_asym 00:05:05.978 CRYPTODEV: Initialisation parameters - name: 0000:da:02.6_qat_asym,socket id: 1, max queue pairs: 0 00:05:05.978 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.7 (socket 1) 00:05:05.978 CRYPTODEV: Creating cryptodev 0000:da:02.7_qat_sym 00:05:05.978 CRYPTODEV: Initialisation parameters - name: 0000:da:02.7_qat_sym,socket id: 1, max queue pairs: 0 
00:05:05.978 CRYPTODEV: Creating cryptodev 0000:da:02.7_qat_asym
00:05:05.978 CRYPTODEV: Initialisation parameters - name: 0000:da:02.7_qat_asym,socket id: 1, max queue pairs: 0
00:05:05.978 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:06.236 EAL: Using IOMMU type 1 (Type 1)
00:05:06.236 EAL: Ignore mapping IO port bar(1)
00:05:06.236 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:05:06.236 EAL: Ignore mapping IO port bar(1)
00:05:06.236 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:05:06.236 EAL: Ignore mapping IO port bar(1)
00:05:06.236 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:05:06.236 EAL: Ignore mapping IO port bar(1)
00:05:06.236 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:05:06.236 EAL: Ignore mapping IO port bar(1)
00:05:06.236 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:05:06.236 EAL: Ignore mapping IO port bar(1)
00:05:06.236 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:05:06.236 EAL: Ignore mapping IO port bar(1)
00:05:06.236 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:05:06.236 EAL: Ignore mapping IO port bar(1)
00:05:06.236 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:05:06.494 EAL: Probe PCI driver: spdk_nvme (8086:0b60) device: 0000:5e:00.0 (socket 0)
00:05:06.494 EAL: Ignore mapping IO port bar(1)
00:05:06.494 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:05:06.494 EAL: Ignore mapping IO port bar(1)
00:05:06.494 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:05:06.494 EAL: Ignore mapping IO port bar(1)
00:05:06.494 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:05:06.494 EAL: Ignore mapping IO port bar(1)
00:05:06.494 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:05:06.494 EAL: Ignore mapping IO port bar(1)
00:05:06.494 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:05:06.494 EAL: Ignore mapping IO port bar(1)
00:05:06.494 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:05:06.494 EAL: Ignore mapping IO port bar(1)
00:05:06.494 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:05:06.494 EAL: Ignore mapping IO port bar(1)
00:05:06.494 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:05:07.871 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:05:07.871 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001080000
00:05:07.871 Starting DPDK initialization...
00:05:07.871 Starting SPDK post initialization...
00:05:07.871 SPDK NVMe probe
00:05:07.871 Attaching to 0000:5e:00.0
00:05:07.871 Attached to 0000:5e:00.0
00:05:07.871 Cleaning up...
00:05:07.871 
00:05:07.871 real	0m1.993s
00:05:07.871 user	0m1.245s
00:05:07.871 sys	0m0.336s
00:05:07.871 11:57:15 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:07.871 11:57:15 -- common/autotest_common.sh@10 -- # set +x
00:05:07.871 ************************************
00:05:07.871 END TEST env_dpdk_post_init
00:05:07.871 ************************************
00:05:08.132 11:57:15 -- env/env.sh@26 -- # uname
00:05:08.132 11:57:15 -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:08.132 11:57:15 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:08.132 11:57:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:08.132 11:57:15 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:08.132 11:57:15 -- common/autotest_common.sh@10 -- # set +x
00:05:08.132 ************************************
00:05:08.132 START TEST env_mem_callbacks
00:05:08.132 ************************************
00:05:08.132 11:57:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:08.132 EAL: Detected CPU lcores: 72
00:05:08.132 EAL: Detected NUMA nodes: 2
00:05:08.132 EAL: Detected shared linkage of DPDK
00:05:08.132 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:08.132 EAL: Selected IOVA mode 'PA'
00:05:08.132 EAL: VFIO support initialized
00:05:08.132 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.0 (socket 0)
00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3d:01.0_qat_sym
00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.0_qat_sym,socket id: 0, max queue pairs: 0
00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3d:01.0_qat_asym
00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.0_qat_asym,socket id: 0, max queue pairs: 0
00:05:08.132 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.1 (socket 0)
00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3d:01.1_qat_sym
00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.1_qat_sym,socket id: 0, max queue pairs: 0
00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3d:01.1_qat_asym
00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.1_qat_asym,socket id: 0, max queue pairs: 0
00:05:08.132 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.2 (socket 0)
00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3d:01.2_qat_sym
00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.2_qat_sym,socket id: 0, max queue pairs: 0
00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3d:01.2_qat_asym
00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.2_qat_asym,socket id: 0, max queue pairs: 0
00:05:08.132 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.3 (socket 0)
00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3d:01.3_qat_sym
00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.3_qat_sym,socket id: 0, max queue pairs: 0
00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3d:01.3_qat_asym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:08.132 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.4 (socket 0) 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3d:01.4_qat_sym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3d:01.4_qat_asym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:08.132 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.5 (socket 0) 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3d:01.5_qat_sym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3d:01.5_qat_asym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:08.132 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.6 (socket 0) 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3d:01.6_qat_sym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3d:01.6_qat_asym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:08.132 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:01.7 (socket 0) 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3d:01.7_qat_sym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3d:01.7_qat_asym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3d:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:08.132 EAL: Probe PCI driver: qat (8086:37c9) device: 
0000:3d:02.0 (socket 0) 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3d:02.0_qat_sym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3d:02.0_qat_asym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:08.132 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.1 (socket 0) 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3d:02.1_qat_sym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3d:02.1_qat_asym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:08.132 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.2 (socket 0) 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3d:02.2_qat_sym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3d:02.2_qat_asym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:08.132 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.3 (socket 0) 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3d:02.3_qat_sym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3d:02.3_qat_asym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:08.132 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.4 (socket 0) 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3d:02.4_qat_sym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:08.132 CRYPTODEV: Creating cryptodev 
0000:3d:02.4_qat_asym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:08.132 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.5 (socket 0) 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3d:02.5_qat_sym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3d:02.5_qat_asym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:08.132 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.6 (socket 0) 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3d:02.6_qat_sym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3d:02.6_qat_asym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:08.132 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3d:02.7 (socket 0) 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3d:02.7_qat_sym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3d:02.7_qat_asym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3d:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:08.132 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.0 (socket 0) 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3f:01.0_qat_sym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3f:01.0_qat_asym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:08.132 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.1 (socket 0) 00:05:08.132 CRYPTODEV: 
Creating cryptodev 0000:3f:01.1_qat_sym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3f:01.1_qat_asym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:08.132 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.2 (socket 0) 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3f:01.2_qat_sym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3f:01.2_qat_asym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:08.132 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.3 (socket 0) 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3f:01.3_qat_sym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3f:01.3_qat_asym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:08.132 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.4 (socket 0) 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3f:01.4_qat_sym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3f:01.4_qat_asym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:08.132 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.5 (socket 0) 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3f:01.5_qat_sym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3f:01.5_qat_asym 00:05:08.132 CRYPTODEV: 
Initialisation parameters - name: 0000:3f:01.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:08.132 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.6 (socket 0) 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3f:01.6_qat_sym 00:05:08.132 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:08.132 CRYPTODEV: Creating cryptodev 0000:3f:01.6_qat_asym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:08.133 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:01.7 (socket 0) 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:3f:01.7_qat_sym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:3f:01.7_qat_asym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:3f:01.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:08.133 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.0 (socket 0) 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:3f:02.0_qat_sym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.0_qat_sym,socket id: 0, max queue pairs: 0 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:3f:02.0_qat_asym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.0_qat_asym,socket id: 0, max queue pairs: 0 00:05:08.133 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.1 (socket 0) 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:3f:02.1_qat_sym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.1_qat_sym,socket id: 0, max queue pairs: 0 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:3f:02.1_qat_asym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.1_qat_asym,socket id: 0, max queue pairs: 0 00:05:08.133 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.2 (socket 0) 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:3f:02.2_qat_sym 
00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.2_qat_sym,socket id: 0, max queue pairs: 0 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:3f:02.2_qat_asym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.2_qat_asym,socket id: 0, max queue pairs: 0 00:05:08.133 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.3 (socket 0) 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:3f:02.3_qat_sym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.3_qat_sym,socket id: 0, max queue pairs: 0 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:3f:02.3_qat_asym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.3_qat_asym,socket id: 0, max queue pairs: 0 00:05:08.133 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.4 (socket 0) 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:3f:02.4_qat_sym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.4_qat_sym,socket id: 0, max queue pairs: 0 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:3f:02.4_qat_asym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.4_qat_asym,socket id: 0, max queue pairs: 0 00:05:08.133 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.5 (socket 0) 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:3f:02.5_qat_sym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.5_qat_sym,socket id: 0, max queue pairs: 0 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:3f:02.5_qat_asym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.5_qat_asym,socket id: 0, max queue pairs: 0 00:05:08.133 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.6 (socket 0) 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:3f:02.6_qat_sym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.6_qat_sym,socket id: 0, max queue pairs: 0 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:3f:02.6_qat_asym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 
0000:3f:02.6_qat_asym,socket id: 0, max queue pairs: 0 00:05:08.133 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:3f:02.7 (socket 0) 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:3f:02.7_qat_sym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.7_qat_sym,socket id: 0, max queue pairs: 0 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:3f:02.7_qat_asym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:3f:02.7_qat_asym,socket id: 0, max queue pairs: 0 00:05:08.133 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.0 (socket 1) 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:da:01.0_qat_sym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:da:01.0_qat_sym,socket id: 1, max queue pairs: 0 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:da:01.0_qat_asym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:da:01.0_qat_asym,socket id: 1, max queue pairs: 0 00:05:08.133 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.1 (socket 1) 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:da:01.1_qat_sym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:da:01.1_qat_sym,socket id: 1, max queue pairs: 0 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:da:01.1_qat_asym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:da:01.1_qat_asym,socket id: 1, max queue pairs: 0 00:05:08.133 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.2 (socket 1) 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:da:01.2_qat_sym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:da:01.2_qat_sym,socket id: 1, max queue pairs: 0 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:da:01.2_qat_asym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:da:01.2_qat_asym,socket id: 1, max queue pairs: 0 00:05:08.133 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.3 (socket 1) 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:da:01.3_qat_sym 00:05:08.133 CRYPTODEV: Initialisation 
parameters - name: 0000:da:01.3_qat_sym,socket id: 1, max queue pairs: 0 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:da:01.3_qat_asym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:da:01.3_qat_asym,socket id: 1, max queue pairs: 0 00:05:08.133 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.4 (socket 1) 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:da:01.4_qat_sym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:da:01.4_qat_sym,socket id: 1, max queue pairs: 0 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:da:01.4_qat_asym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:da:01.4_qat_asym,socket id: 1, max queue pairs: 0 00:05:08.133 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.5 (socket 1) 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:da:01.5_qat_sym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:da:01.5_qat_sym,socket id: 1, max queue pairs: 0 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:da:01.5_qat_asym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:da:01.5_qat_asym,socket id: 1, max queue pairs: 0 00:05:08.133 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.6 (socket 1) 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:da:01.6_qat_sym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:da:01.6_qat_sym,socket id: 1, max queue pairs: 0 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:da:01.6_qat_asym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:da:01.6_qat_asym,socket id: 1, max queue pairs: 0 00:05:08.133 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:01.7 (socket 1) 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:da:01.7_qat_sym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:da:01.7_qat_sym,socket id: 1, max queue pairs: 0 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:da:01.7_qat_asym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:da:01.7_qat_asym,socket id: 1, max queue 
pairs: 0 00:05:08.133 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.0 (socket 1) 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:da:02.0_qat_sym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:da:02.0_qat_sym,socket id: 1, max queue pairs: 0 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:da:02.0_qat_asym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:da:02.0_qat_asym,socket id: 1, max queue pairs: 0 00:05:08.133 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.1 (socket 1) 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:da:02.1_qat_sym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:da:02.1_qat_sym,socket id: 1, max queue pairs: 0 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:da:02.1_qat_asym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:da:02.1_qat_asym,socket id: 1, max queue pairs: 0 00:05:08.133 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.2 (socket 1) 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:da:02.2_qat_sym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:da:02.2_qat_sym,socket id: 1, max queue pairs: 0 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:da:02.2_qat_asym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:da:02.2_qat_asym,socket id: 1, max queue pairs: 0 00:05:08.133 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.3 (socket 1) 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:da:02.3_qat_sym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:da:02.3_qat_sym,socket id: 1, max queue pairs: 0 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:da:02.3_qat_asym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:da:02.3_qat_asym,socket id: 1, max queue pairs: 0 00:05:08.133 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.4 (socket 1) 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:da:02.4_qat_sym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:da:02.4_qat_sym,socket id: 
1, max queue pairs: 0 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:da:02.4_qat_asym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:da:02.4_qat_asym,socket id: 1, max queue pairs: 0 00:05:08.133 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.5 (socket 1) 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:da:02.5_qat_sym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:da:02.5_qat_sym,socket id: 1, max queue pairs: 0 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:da:02.5_qat_asym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:da:02.5_qat_asym,socket id: 1, max queue pairs: 0 00:05:08.133 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.6 (socket 1) 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:da:02.6_qat_sym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:da:02.6_qat_sym,socket id: 1, max queue pairs: 0 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:da:02.6_qat_asym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:da:02.6_qat_asym,socket id: 1, max queue pairs: 0 00:05:08.133 EAL: Probe PCI driver: qat (8086:37c9) device: 0000:da:02.7 (socket 1) 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:da:02.7_qat_sym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:da:02.7_qat_sym,socket id: 1, max queue pairs: 0 00:05:08.133 CRYPTODEV: Creating cryptodev 0000:da:02.7_qat_asym 00:05:08.133 CRYPTODEV: Initialisation parameters - name: 0000:da:02.7_qat_asym,socket id: 1, max queue pairs: 0 00:05:08.133 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:08.133 00:05:08.133 00:05:08.133 CUnit - A unit testing framework for C - Version 2.1-3 00:05:08.133 http://cunit.sourceforge.net/ 00:05:08.133 00:05:08.133 00:05:08.133 Suite: memory 00:05:08.133 Test: test ... 
00:05:08.133 register 0x200000200000 2097152 00:05:08.133 register 0x201000a00000 2097152 00:05:08.133 malloc 3145728 00:05:08.133 register 0x200000400000 4194304 00:05:08.133 buf 0x200000500000 len 3145728 PASSED 00:05:08.133 malloc 64 00:05:08.133 buf 0x2000004fff40 len 64 PASSED 00:05:08.133 malloc 4194304 00:05:08.133 register 0x200000800000 6291456 00:05:08.133 buf 0x200000a00000 len 4194304 PASSED 00:05:08.133 free 0x200000500000 3145728 00:05:08.133 free 0x2000004fff40 64 00:05:08.133 unregister 0x200000400000 4194304 PASSED 00:05:08.133 free 0x200000a00000 4194304 00:05:08.133 unregister 0x200000800000 6291456 PASSED 00:05:08.134 malloc 8388608 00:05:08.134 register 0x200000400000 10485760 00:05:08.134 buf 0x200000600000 len 8388608 PASSED 00:05:08.134 free 0x200000600000 8388608 00:05:08.134 unregister 0x200000400000 10485760 PASSED 00:05:08.134 passed 00:05:08.134 00:05:08.134 Run Summary: Type Total Ran Passed Failed Inactive 00:05:08.134 suites 1 1 n/a 0 0 00:05:08.134 tests 1 1 1 0 0 00:05:08.134 asserts 16 16 16 0 n/a 00:05:08.134 00:05:08.134 Elapsed time = 0.005 seconds 00:05:08.134 00:05:08.134 real 0m0.085s 00:05:08.134 user 0m0.025s 00:05:08.134 sys 0m0.060s 00:05:08.134 11:57:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.134 11:57:15 -- common/autotest_common.sh@10 -- # set +x 00:05:08.134 ************************************ 00:05:08.134 END TEST env_mem_callbacks 00:05:08.134 ************************************ 00:05:08.134 00:05:08.134 real 0m3.887s 00:05:08.134 user 0m2.267s 00:05:08.134 sys 0m1.220s 00:05:08.134 11:57:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:08.134 11:57:15 -- common/autotest_common.sh@10 -- # set +x 00:05:08.134 ************************************ 00:05:08.134 END TEST env 00:05:08.134 ************************************ 00:05:08.134 11:57:15 -- spdk/autotest.sh@176 -- # run_test rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh 00:05:08.134 11:57:15 -- 
common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:08.134 11:57:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:08.134 11:57:15 -- common/autotest_common.sh@10 -- # set +x 00:05:08.134 ************************************ 00:05:08.134 START TEST rpc 00:05:08.134 ************************************ 00:05:08.134 11:57:15 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc/rpc.sh 00:05:08.393 * Looking for test storage... 00:05:08.393 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:05:08.393 11:57:15 -- rpc/rpc.sh@65 -- # spdk_pid=1177037 00:05:08.393 11:57:15 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:08.393 11:57:15 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:08.393 11:57:15 -- rpc/rpc.sh@67 -- # waitforlisten 1177037 00:05:08.393 11:57:15 -- common/autotest_common.sh@819 -- # '[' -z 1177037 ']' 00:05:08.393 11:57:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:08.393 11:57:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:08.393 11:57:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:08.393 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:08.393 11:57:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:08.393 11:57:15 -- common/autotest_common.sh@10 -- # set +x 00:05:08.393 [2024-07-25 11:57:15.529458] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:08.393 [2024-07-25 11:57:15.529513] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1177037 ] 00:05:08.393 [2024-07-25 11:57:15.616457] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:08.652 [2024-07-25 11:57:15.703667] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:08.652 [2024-07-25 11:57:15.703776] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:08.652 [2024-07-25 11:57:15.703786] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1177037' to capture a snapshot of events at runtime. 00:05:08.652 [2024-07-25 11:57:15.703796] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1177037 for offline analysis/debug. 00:05:08.652 [2024-07-25 11:57:15.703821] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.221 11:57:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:09.221 11:57:16 -- common/autotest_common.sh@852 -- # return 0 00:05:09.221 11:57:16 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:05:09.221 11:57:16 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc 00:05:09.221 11:57:16 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:09.221 11:57:16 -- 
rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:09.221 11:57:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:09.221 11:57:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:09.221 11:57:16 -- common/autotest_common.sh@10 -- # set +x 00:05:09.221 ************************************ 00:05:09.221 START TEST rpc_integrity 00:05:09.221 ************************************ 00:05:09.221 11:57:16 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:09.221 11:57:16 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:09.221 11:57:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:09.221 11:57:16 -- common/autotest_common.sh@10 -- # set +x 00:05:09.221 11:57:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:09.221 11:57:16 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:09.221 11:57:16 -- rpc/rpc.sh@13 -- # jq length 00:05:09.221 11:57:16 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:09.221 11:57:16 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:09.221 11:57:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:09.221 11:57:16 -- common/autotest_common.sh@10 -- # set +x 00:05:09.221 11:57:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:09.221 11:57:16 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:09.221 11:57:16 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:09.221 11:57:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:09.221 11:57:16 -- common/autotest_common.sh@10 -- # set +x 00:05:09.221 11:57:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:09.221 11:57:16 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:09.221 { 00:05:09.221 "name": "Malloc0", 00:05:09.221 "aliases": [ 00:05:09.221 "8ea8eeac-5a05-41b8-8d65-dadd733ba88a" 00:05:09.221 ], 00:05:09.221 "product_name": "Malloc disk", 00:05:09.221 "block_size": 512, 00:05:09.221 "num_blocks": 16384, 00:05:09.221 "uuid": "8ea8eeac-5a05-41b8-8d65-dadd733ba88a", 00:05:09.221 "assigned_rate_limits": { 00:05:09.221 
"rw_ios_per_sec": 0, 00:05:09.221 "rw_mbytes_per_sec": 0, 00:05:09.221 "r_mbytes_per_sec": 0, 00:05:09.221 "w_mbytes_per_sec": 0 00:05:09.221 }, 00:05:09.221 "claimed": false, 00:05:09.221 "zoned": false, 00:05:09.221 "supported_io_types": { 00:05:09.221 "read": true, 00:05:09.221 "write": true, 00:05:09.221 "unmap": true, 00:05:09.221 "write_zeroes": true, 00:05:09.221 "flush": true, 00:05:09.221 "reset": true, 00:05:09.221 "compare": false, 00:05:09.221 "compare_and_write": false, 00:05:09.221 "abort": true, 00:05:09.221 "nvme_admin": false, 00:05:09.221 "nvme_io": false 00:05:09.221 }, 00:05:09.221 "memory_domains": [ 00:05:09.221 { 00:05:09.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.221 "dma_device_type": 2 00:05:09.221 } 00:05:09.221 ], 00:05:09.221 "driver_specific": {} 00:05:09.221 } 00:05:09.221 ]' 00:05:09.221 11:57:16 -- rpc/rpc.sh@17 -- # jq length 00:05:09.221 11:57:16 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:09.221 11:57:16 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:09.221 11:57:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:09.221 11:57:16 -- common/autotest_common.sh@10 -- # set +x 00:05:09.221 [2024-07-25 11:57:16.460856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:09.221 [2024-07-25 11:57:16.460890] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:09.221 [2024-07-25 11:57:16.460903] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x27632e0 00:05:09.221 [2024-07-25 11:57:16.460912] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:09.221 [2024-07-25 11:57:16.462038] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:09.221 [2024-07-25 11:57:16.462059] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:09.221 Passthru0 00:05:09.221 11:57:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 
]] 00:05:09.221 11:57:16 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:09.221 11:57:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:09.221 11:57:16 -- common/autotest_common.sh@10 -- # set +x 00:05:09.221 11:57:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:09.221 11:57:16 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:09.221 { 00:05:09.221 "name": "Malloc0", 00:05:09.221 "aliases": [ 00:05:09.221 "8ea8eeac-5a05-41b8-8d65-dadd733ba88a" 00:05:09.221 ], 00:05:09.221 "product_name": "Malloc disk", 00:05:09.221 "block_size": 512, 00:05:09.221 "num_blocks": 16384, 00:05:09.221 "uuid": "8ea8eeac-5a05-41b8-8d65-dadd733ba88a", 00:05:09.221 "assigned_rate_limits": { 00:05:09.221 "rw_ios_per_sec": 0, 00:05:09.221 "rw_mbytes_per_sec": 0, 00:05:09.221 "r_mbytes_per_sec": 0, 00:05:09.221 "w_mbytes_per_sec": 0 00:05:09.221 }, 00:05:09.221 "claimed": true, 00:05:09.221 "claim_type": "exclusive_write", 00:05:09.221 "zoned": false, 00:05:09.221 "supported_io_types": { 00:05:09.221 "read": true, 00:05:09.221 "write": true, 00:05:09.221 "unmap": true, 00:05:09.221 "write_zeroes": true, 00:05:09.221 "flush": true, 00:05:09.221 "reset": true, 00:05:09.221 "compare": false, 00:05:09.221 "compare_and_write": false, 00:05:09.221 "abort": true, 00:05:09.221 "nvme_admin": false, 00:05:09.221 "nvme_io": false 00:05:09.221 }, 00:05:09.221 "memory_domains": [ 00:05:09.221 { 00:05:09.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.221 "dma_device_type": 2 00:05:09.221 } 00:05:09.221 ], 00:05:09.221 "driver_specific": {} 00:05:09.221 }, 00:05:09.221 { 00:05:09.221 "name": "Passthru0", 00:05:09.221 "aliases": [ 00:05:09.221 "589230b4-bb7e-5a50-a0ba-5cbd0f55b83d" 00:05:09.221 ], 00:05:09.221 "product_name": "passthru", 00:05:09.221 "block_size": 512, 00:05:09.221 "num_blocks": 16384, 00:05:09.221 "uuid": "589230b4-bb7e-5a50-a0ba-5cbd0f55b83d", 00:05:09.221 "assigned_rate_limits": { 00:05:09.221 "rw_ios_per_sec": 0, 00:05:09.221 "rw_mbytes_per_sec": 0, 00:05:09.221 
"r_mbytes_per_sec": 0, 00:05:09.221 "w_mbytes_per_sec": 0 00:05:09.221 }, 00:05:09.221 "claimed": false, 00:05:09.221 "zoned": false, 00:05:09.221 "supported_io_types": { 00:05:09.221 "read": true, 00:05:09.221 "write": true, 00:05:09.221 "unmap": true, 00:05:09.221 "write_zeroes": true, 00:05:09.221 "flush": true, 00:05:09.221 "reset": true, 00:05:09.221 "compare": false, 00:05:09.221 "compare_and_write": false, 00:05:09.221 "abort": true, 00:05:09.221 "nvme_admin": false, 00:05:09.221 "nvme_io": false 00:05:09.221 }, 00:05:09.221 "memory_domains": [ 00:05:09.221 { 00:05:09.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.221 "dma_device_type": 2 00:05:09.221 } 00:05:09.221 ], 00:05:09.221 "driver_specific": { 00:05:09.221 "passthru": { 00:05:09.221 "name": "Passthru0", 00:05:09.221 "base_bdev_name": "Malloc0" 00:05:09.221 } 00:05:09.221 } 00:05:09.221 } 00:05:09.221 ]' 00:05:09.221 11:57:16 -- rpc/rpc.sh@21 -- # jq length 00:05:09.481 11:57:16 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:09.481 11:57:16 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:09.481 11:57:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:09.481 11:57:16 -- common/autotest_common.sh@10 -- # set +x 00:05:09.481 11:57:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:09.481 11:57:16 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:09.481 11:57:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:09.481 11:57:16 -- common/autotest_common.sh@10 -- # set +x 00:05:09.481 11:57:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:09.481 11:57:16 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:09.481 11:57:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:09.481 11:57:16 -- common/autotest_common.sh@10 -- # set +x 00:05:09.481 11:57:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:09.481 11:57:16 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:09.481 11:57:16 -- rpc/rpc.sh@26 -- # jq length 00:05:09.481 
11:57:16 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:09.481 00:05:09.481 real 0m0.291s 00:05:09.481 user 0m0.179s 00:05:09.481 sys 0m0.049s 00:05:09.481 11:57:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.481 11:57:16 -- common/autotest_common.sh@10 -- # set +x 00:05:09.481 ************************************ 00:05:09.481 END TEST rpc_integrity 00:05:09.481 ************************************ 00:05:09.481 11:57:16 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:09.481 11:57:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:09.481 11:57:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:09.481 11:57:16 -- common/autotest_common.sh@10 -- # set +x 00:05:09.481 ************************************ 00:05:09.481 START TEST rpc_plugins 00:05:09.481 ************************************ 00:05:09.481 11:57:16 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:09.481 11:57:16 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:09.481 11:57:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:09.481 11:57:16 -- common/autotest_common.sh@10 -- # set +x 00:05:09.481 11:57:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:09.481 11:57:16 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:09.481 11:57:16 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:09.481 11:57:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:09.481 11:57:16 -- common/autotest_common.sh@10 -- # set +x 00:05:09.481 11:57:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:09.481 11:57:16 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:09.481 { 00:05:09.481 "name": "Malloc1", 00:05:09.481 "aliases": [ 00:05:09.481 "fe4e0174-841c-4416-bb0d-4ebbfc69f851" 00:05:09.481 ], 00:05:09.481 "product_name": "Malloc disk", 00:05:09.481 "block_size": 4096, 00:05:09.481 "num_blocks": 256, 00:05:09.481 "uuid": "fe4e0174-841c-4416-bb0d-4ebbfc69f851", 00:05:09.481 "assigned_rate_limits": { 00:05:09.481 "rw_ios_per_sec": 0, 
00:05:09.481 "rw_mbytes_per_sec": 0, 00:05:09.481 "r_mbytes_per_sec": 0, 00:05:09.481 "w_mbytes_per_sec": 0 00:05:09.481 }, 00:05:09.481 "claimed": false, 00:05:09.481 "zoned": false, 00:05:09.481 "supported_io_types": { 00:05:09.481 "read": true, 00:05:09.481 "write": true, 00:05:09.481 "unmap": true, 00:05:09.481 "write_zeroes": true, 00:05:09.481 "flush": true, 00:05:09.481 "reset": true, 00:05:09.481 "compare": false, 00:05:09.481 "compare_and_write": false, 00:05:09.481 "abort": true, 00:05:09.481 "nvme_admin": false, 00:05:09.481 "nvme_io": false 00:05:09.481 }, 00:05:09.481 "memory_domains": [ 00:05:09.481 { 00:05:09.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.481 "dma_device_type": 2 00:05:09.481 } 00:05:09.481 ], 00:05:09.481 "driver_specific": {} 00:05:09.481 } 00:05:09.481 ]' 00:05:09.481 11:57:16 -- rpc/rpc.sh@32 -- # jq length 00:05:09.481 11:57:16 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:09.481 11:57:16 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:09.481 11:57:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:09.481 11:57:16 -- common/autotest_common.sh@10 -- # set +x 00:05:09.481 11:57:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:09.481 11:57:16 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:09.481 11:57:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:09.481 11:57:16 -- common/autotest_common.sh@10 -- # set +x 00:05:09.481 11:57:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:09.481 11:57:16 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:09.481 11:57:16 -- rpc/rpc.sh@36 -- # jq length 00:05:09.740 11:57:16 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:09.740 00:05:09.740 real 0m0.147s 00:05:09.740 user 0m0.092s 00:05:09.740 sys 0m0.020s 00:05:09.740 11:57:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.740 11:57:16 -- common/autotest_common.sh@10 -- # set +x 00:05:09.740 ************************************ 00:05:09.740 END TEST rpc_plugins 
00:05:09.740 ************************************ 00:05:09.740 11:57:16 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:09.740 11:57:16 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:09.740 11:57:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:09.740 11:57:16 -- common/autotest_common.sh@10 -- # set +x 00:05:09.740 ************************************ 00:05:09.740 START TEST rpc_trace_cmd_test 00:05:09.740 ************************************ 00:05:09.740 11:57:16 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:09.740 11:57:16 -- rpc/rpc.sh@40 -- # local info 00:05:09.740 11:57:16 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:09.740 11:57:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:09.740 11:57:16 -- common/autotest_common.sh@10 -- # set +x 00:05:09.740 11:57:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:09.740 11:57:16 -- rpc/rpc.sh@42 -- # info='{ 00:05:09.740 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1177037", 00:05:09.740 "tpoint_group_mask": "0x8", 00:05:09.740 "iscsi_conn": { 00:05:09.740 "mask": "0x2", 00:05:09.740 "tpoint_mask": "0x0" 00:05:09.740 }, 00:05:09.740 "scsi": { 00:05:09.740 "mask": "0x4", 00:05:09.740 "tpoint_mask": "0x0" 00:05:09.740 }, 00:05:09.740 "bdev": { 00:05:09.740 "mask": "0x8", 00:05:09.740 "tpoint_mask": "0xffffffffffffffff" 00:05:09.740 }, 00:05:09.740 "nvmf_rdma": { 00:05:09.740 "mask": "0x10", 00:05:09.740 "tpoint_mask": "0x0" 00:05:09.740 }, 00:05:09.740 "nvmf_tcp": { 00:05:09.740 "mask": "0x20", 00:05:09.740 "tpoint_mask": "0x0" 00:05:09.740 }, 00:05:09.740 "ftl": { 00:05:09.740 "mask": "0x40", 00:05:09.740 "tpoint_mask": "0x0" 00:05:09.740 }, 00:05:09.740 "blobfs": { 00:05:09.740 "mask": "0x80", 00:05:09.740 "tpoint_mask": "0x0" 00:05:09.740 }, 00:05:09.740 "dsa": { 00:05:09.740 "mask": "0x200", 00:05:09.740 "tpoint_mask": "0x0" 00:05:09.740 }, 00:05:09.740 "thread": { 00:05:09.740 "mask": "0x400", 00:05:09.740 
"tpoint_mask": "0x0" 00:05:09.740 }, 00:05:09.740 "nvme_pcie": { 00:05:09.740 "mask": "0x800", 00:05:09.740 "tpoint_mask": "0x0" 00:05:09.740 }, 00:05:09.740 "iaa": { 00:05:09.740 "mask": "0x1000", 00:05:09.740 "tpoint_mask": "0x0" 00:05:09.740 }, 00:05:09.740 "nvme_tcp": { 00:05:09.740 "mask": "0x2000", 00:05:09.740 "tpoint_mask": "0x0" 00:05:09.740 }, 00:05:09.740 "bdev_nvme": { 00:05:09.740 "mask": "0x4000", 00:05:09.740 "tpoint_mask": "0x0" 00:05:09.740 } 00:05:09.740 }' 00:05:09.740 11:57:16 -- rpc/rpc.sh@43 -- # jq length 00:05:09.740 11:57:16 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:09.740 11:57:16 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:09.740 11:57:16 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:09.740 11:57:16 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:09.740 11:57:17 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:09.740 11:57:17 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:09.740 11:57:17 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:09.740 11:57:17 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:09.999 11:57:17 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:09.999 00:05:09.999 real 0m0.232s 00:05:09.999 user 0m0.191s 00:05:09.999 sys 0m0.033s 00:05:09.999 11:57:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.999 11:57:17 -- common/autotest_common.sh@10 -- # set +x 00:05:09.999 ************************************ 00:05:09.999 END TEST rpc_trace_cmd_test 00:05:09.999 ************************************ 00:05:09.999 11:57:17 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:09.999 11:57:17 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:09.999 11:57:17 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:09.999 11:57:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:09.999 11:57:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:09.999 11:57:17 -- common/autotest_common.sh@10 -- # set +x 00:05:09.999 ************************************ 
00:05:09.999 START TEST rpc_daemon_integrity 00:05:09.999 ************************************ 00:05:09.999 11:57:17 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:09.999 11:57:17 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:09.999 11:57:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:09.999 11:57:17 -- common/autotest_common.sh@10 -- # set +x 00:05:09.999 11:57:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:09.999 11:57:17 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:09.999 11:57:17 -- rpc/rpc.sh@13 -- # jq length 00:05:09.999 11:57:17 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:09.999 11:57:17 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:09.999 11:57:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:09.999 11:57:17 -- common/autotest_common.sh@10 -- # set +x 00:05:09.999 11:57:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:09.999 11:57:17 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:09.999 11:57:17 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:09.999 11:57:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:09.999 11:57:17 -- common/autotest_common.sh@10 -- # set +x 00:05:09.999 11:57:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:09.999 11:57:17 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:09.999 { 00:05:09.999 "name": "Malloc2", 00:05:09.999 "aliases": [ 00:05:09.999 "24978fa7-ac49-485e-a537-621fa000c4a0" 00:05:09.999 ], 00:05:09.999 "product_name": "Malloc disk", 00:05:09.999 "block_size": 512, 00:05:09.999 "num_blocks": 16384, 00:05:09.999 "uuid": "24978fa7-ac49-485e-a537-621fa000c4a0", 00:05:09.999 "assigned_rate_limits": { 00:05:09.999 "rw_ios_per_sec": 0, 00:05:09.999 "rw_mbytes_per_sec": 0, 00:05:09.999 "r_mbytes_per_sec": 0, 00:05:09.999 "w_mbytes_per_sec": 0 00:05:09.999 }, 00:05:09.999 "claimed": false, 00:05:09.999 "zoned": false, 00:05:09.999 "supported_io_types": { 00:05:09.999 "read": true, 00:05:09.999 "write": true, 00:05:09.999 "unmap": true, 
00:05:09.999 "write_zeroes": true, 00:05:09.999 "flush": true, 00:05:09.999 "reset": true, 00:05:09.999 "compare": false, 00:05:09.999 "compare_and_write": false, 00:05:09.999 "abort": true, 00:05:09.999 "nvme_admin": false, 00:05:09.999 "nvme_io": false 00:05:09.999 }, 00:05:09.999 "memory_domains": [ 00:05:09.999 { 00:05:09.999 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.999 "dma_device_type": 2 00:05:09.999 } 00:05:09.999 ], 00:05:09.999 "driver_specific": {} 00:05:09.999 } 00:05:09.999 ]' 00:05:09.999 11:57:17 -- rpc/rpc.sh@17 -- # jq length 00:05:09.999 11:57:17 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:09.999 11:57:17 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:09.999 11:57:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:09.999 11:57:17 -- common/autotest_common.sh@10 -- # set +x 00:05:09.999 [2024-07-25 11:57:17.279074] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:09.999 [2024-07-25 11:57:17.279103] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:09.999 [2024-07-25 11:57:17.279118] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x28fafd0 00:05:09.999 [2024-07-25 11:57:17.279127] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:09.999 [2024-07-25 11:57:17.280079] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:09.999 [2024-07-25 11:57:17.280101] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:09.999 Passthru0 00:05:09.999 11:57:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:09.999 11:57:17 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:09.999 11:57:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:09.999 11:57:17 -- common/autotest_common.sh@10 -- # set +x 00:05:10.258 11:57:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:10.258 11:57:17 -- rpc/rpc.sh@20 -- 
# bdevs='[ 00:05:10.258 { 00:05:10.258 "name": "Malloc2", 00:05:10.258 "aliases": [ 00:05:10.258 "24978fa7-ac49-485e-a537-621fa000c4a0" 00:05:10.258 ], 00:05:10.258 "product_name": "Malloc disk", 00:05:10.258 "block_size": 512, 00:05:10.258 "num_blocks": 16384, 00:05:10.258 "uuid": "24978fa7-ac49-485e-a537-621fa000c4a0", 00:05:10.258 "assigned_rate_limits": { 00:05:10.258 "rw_ios_per_sec": 0, 00:05:10.258 "rw_mbytes_per_sec": 0, 00:05:10.258 "r_mbytes_per_sec": 0, 00:05:10.258 "w_mbytes_per_sec": 0 00:05:10.258 }, 00:05:10.258 "claimed": true, 00:05:10.258 "claim_type": "exclusive_write", 00:05:10.258 "zoned": false, 00:05:10.258 "supported_io_types": { 00:05:10.258 "read": true, 00:05:10.258 "write": true, 00:05:10.258 "unmap": true, 00:05:10.258 "write_zeroes": true, 00:05:10.258 "flush": true, 00:05:10.258 "reset": true, 00:05:10.258 "compare": false, 00:05:10.258 "compare_and_write": false, 00:05:10.258 "abort": true, 00:05:10.258 "nvme_admin": false, 00:05:10.258 "nvme_io": false 00:05:10.258 }, 00:05:10.258 "memory_domains": [ 00:05:10.258 { 00:05:10.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.258 "dma_device_type": 2 00:05:10.258 } 00:05:10.258 ], 00:05:10.258 "driver_specific": {} 00:05:10.258 }, 00:05:10.258 { 00:05:10.258 "name": "Passthru0", 00:05:10.258 "aliases": [ 00:05:10.258 "0a467590-7d0e-58c2-96b8-d52419354cd0" 00:05:10.258 ], 00:05:10.258 "product_name": "passthru", 00:05:10.258 "block_size": 512, 00:05:10.258 "num_blocks": 16384, 00:05:10.258 "uuid": "0a467590-7d0e-58c2-96b8-d52419354cd0", 00:05:10.258 "assigned_rate_limits": { 00:05:10.258 "rw_ios_per_sec": 0, 00:05:10.258 "rw_mbytes_per_sec": 0, 00:05:10.258 "r_mbytes_per_sec": 0, 00:05:10.258 "w_mbytes_per_sec": 0 00:05:10.258 }, 00:05:10.258 "claimed": false, 00:05:10.258 "zoned": false, 00:05:10.258 "supported_io_types": { 00:05:10.258 "read": true, 00:05:10.258 "write": true, 00:05:10.258 "unmap": true, 00:05:10.258 "write_zeroes": true, 00:05:10.258 "flush": true, 00:05:10.258 
"reset": true, 00:05:10.258 "compare": false, 00:05:10.258 "compare_and_write": false, 00:05:10.258 "abort": true, 00:05:10.258 "nvme_admin": false, 00:05:10.258 "nvme_io": false 00:05:10.258 }, 00:05:10.258 "memory_domains": [ 00:05:10.258 { 00:05:10.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.258 "dma_device_type": 2 00:05:10.258 } 00:05:10.258 ], 00:05:10.258 "driver_specific": { 00:05:10.258 "passthru": { 00:05:10.258 "name": "Passthru0", 00:05:10.258 "base_bdev_name": "Malloc2" 00:05:10.258 } 00:05:10.258 } 00:05:10.258 } 00:05:10.258 ]' 00:05:10.258 11:57:17 -- rpc/rpc.sh@21 -- # jq length 00:05:10.258 11:57:17 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:10.258 11:57:17 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:10.258 11:57:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:10.258 11:57:17 -- common/autotest_common.sh@10 -- # set +x 00:05:10.258 11:57:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:10.258 11:57:17 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:10.258 11:57:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:10.258 11:57:17 -- common/autotest_common.sh@10 -- # set +x 00:05:10.258 11:57:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:10.258 11:57:17 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:10.258 11:57:17 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:10.258 11:57:17 -- common/autotest_common.sh@10 -- # set +x 00:05:10.258 11:57:17 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:10.258 11:57:17 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:10.258 11:57:17 -- rpc/rpc.sh@26 -- # jq length 00:05:10.258 11:57:17 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:10.258 00:05:10.258 real 0m0.288s 00:05:10.258 user 0m0.168s 00:05:10.258 sys 0m0.056s 00:05:10.258 11:57:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.258 11:57:17 -- common/autotest_common.sh@10 -- # set +x 00:05:10.258 ************************************ 
00:05:10.258 END TEST rpc_daemon_integrity 00:05:10.258 ************************************ 00:05:10.258 11:57:17 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:10.258 11:57:17 -- rpc/rpc.sh@84 -- # killprocess 1177037 00:05:10.258 11:57:17 -- common/autotest_common.sh@926 -- # '[' -z 1177037 ']' 00:05:10.258 11:57:17 -- common/autotest_common.sh@930 -- # kill -0 1177037 00:05:10.258 11:57:17 -- common/autotest_common.sh@931 -- # uname 00:05:10.258 11:57:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:10.258 11:57:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1177037 00:05:10.258 11:57:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:10.258 11:57:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:10.258 11:57:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1177037' 00:05:10.258 killing process with pid 1177037 00:05:10.258 11:57:17 -- common/autotest_common.sh@945 -- # kill 1177037 00:05:10.258 11:57:17 -- common/autotest_common.sh@950 -- # wait 1177037 00:05:10.828 00:05:10.828 real 0m2.522s 00:05:10.828 user 0m3.096s 00:05:10.828 sys 0m0.808s 00:05:10.828 11:57:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:10.828 11:57:17 -- common/autotest_common.sh@10 -- # set +x 00:05:10.828 ************************************ 00:05:10.828 END TEST rpc 00:05:10.828 ************************************ 00:05:10.828 11:57:17 -- spdk/autotest.sh@177 -- # run_test rpc_client /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:10.828 11:57:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:10.828 11:57:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:10.828 11:57:17 -- common/autotest_common.sh@10 -- # set +x 00:05:10.828 ************************************ 00:05:10.828 START TEST rpc_client 00:05:10.828 ************************************ 00:05:10.828 11:57:17 -- 
common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:05:10.828 * Looking for test storage...
00:05:10.828 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client
00:05:10.828 11:57:18 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:05:10.828 OK
00:05:10.828 11:57:18 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:05:10.828
00:05:10.828 real 0m0.123s
00:05:10.828 user 0m0.050s
00:05:10.828 sys 0m0.082s
00:05:10.828 11:57:18 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:10.828 11:57:18 -- common/autotest_common.sh@10 -- # set +x
00:05:10.828 ************************************
00:05:10.828 END TEST rpc_client
00:05:10.828 ************************************
00:05:10.828 11:57:18 -- spdk/autotest.sh@178 -- # run_test json_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh
00:05:10.828 11:57:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:10.828 11:57:18 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:10.828 11:57:18 -- common/autotest_common.sh@10 -- # set +x
00:05:10.828 ************************************
00:05:10.828 START TEST json_config
00:05:10.828 ************************************
00:05:10.828 11:57:18 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config.sh
00:05:11.087 11:57:18 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh
00:05:11.087 11:57:18 -- nvmf/common.sh@7 -- # uname -s
00:05:11.087 11:57:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:11.087 11:57:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:11.087 11:57:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:11.087 11:57:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:11.087 11:57:18 --
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:11.087 11:57:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:11.087 11:57:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:11.088 11:57:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:11.088 11:57:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:11.088 11:57:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:11.088 11:57:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d40ca9-2a78-e711-906e-0017a4403562 00:05:11.088 11:57:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d40ca9-2a78-e711-906e-0017a4403562 00:05:11.088 11:57:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:11.088 11:57:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:11.088 11:57:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:11.088 11:57:18 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:05:11.088 11:57:18 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:11.088 11:57:18 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:11.088 11:57:18 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:11.088 11:57:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.088 11:57:18 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.088 11:57:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.088 11:57:18 -- paths/export.sh@5 -- # export PATH 00:05:11.088 11:57:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:11.088 11:57:18 -- nvmf/common.sh@46 -- # : 0 00:05:11.088 11:57:18 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:11.088 11:57:18 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:11.088 11:57:18 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:11.088 11:57:18 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:11.088 11:57:18 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:11.088 11:57:18 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:11.088 11:57:18 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:11.088 11:57:18 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:11.088 
11:57:18 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]]
00:05:11.088 11:57:18 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]]
00:05:11.088 11:57:18 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]]
00:05:11.088 11:57:18 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:05:11.088 11:57:18 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='')
00:05:11.088 11:57:18 -- json_config/json_config.sh@30 -- # declare -A app_pid
00:05:11.088 11:57:18 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:05:11.088 11:57:18 -- json_config/json_config.sh@31 -- # declare -A app_socket
00:05:11.088 11:57:18 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:05:11.088 11:57:18 -- json_config/json_config.sh@32 -- # declare -A app_params
00:05:11.088 11:57:18 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json')
00:05:11.088 11:57:18 -- json_config/json_config.sh@33 -- # declare -A configs_path
00:05:11.088 11:57:18 -- json_config/json_config.sh@43 -- # last_event_id=0
00:05:11.088 11:57:18 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:05:11.088 11:57:18 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init'
00:05:11.088 INFO: JSON configuration test init
00:05:11.088 11:57:18 -- json_config/json_config.sh@420 -- # json_config_test_init
00:05:11.088 11:57:18 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init
00:05:11.088 11:57:18 -- common/autotest_common.sh@712 -- # xtrace_disable
00:05:11.088 11:57:18 -- common/autotest_common.sh@10 -- # set +x
00:05:11.088 11:57:18 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target
00:05:11.088 11:57:18 -- common/autotest_common.sh@712 -- # xtrace_disable
00:05:11.088 11:57:18 -- common/autotest_common.sh@10 -- # set +x
00:05:11.088 11:57:18 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc
00:05:11.088 11:57:18 -- json_config/json_config.sh@98 -- # local app=target
00:05:11.088 11:57:18 -- json_config/json_config.sh@99 -- # shift
00:05:11.088 11:57:18 -- json_config/json_config.sh@101 -- # [[ -n 22 ]]
00:05:11.088 11:57:18 -- json_config/json_config.sh@102 -- # [[ -z '' ]]
00:05:11.088 11:57:18 -- json_config/json_config.sh@104 -- # local app_extra_params=
00:05:11.088 11:57:18 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]]
00:05:11.088 11:57:18 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]]
00:05:11.088 11:57:18 -- json_config/json_config.sh@111 -- # app_pid[$app]=1177601
00:05:11.088 11:57:18 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...'
00:05:11.088 Waiting for target to run...
00:05:11.088 11:57:18 -- json_config/json_config.sh@114 -- # waitforlisten 1177601 /var/tmp/spdk_tgt.sock
00:05:11.088 11:57:18 -- common/autotest_common.sh@819 -- # '[' -z 1177601 ']'
00:05:11.088 11:57:18 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:05:11.088 11:57:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:11.088 11:57:18 -- common/autotest_common.sh@824 -- # local max_retries=100
00:05:11.088 11:57:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:05:11.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
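The `waitforlisten 1177601 /var/tmp/spdk_tgt.sock` step above blocks until the freshly launched `spdk_tgt` accepts connections on its RPC socket. The core polling idea can be sketched in Python (a simplified illustration only; the real `waitforlisten` helper in `autotest_common.sh` also checks that the PID is still alive and honors `max_retries`):

```python
import socket
import time

def wait_for_unix_socket(path: str, timeout: float = 5.0) -> bool:
    """Poll a UNIX domain socket until a connect() succeeds or we time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)
            return True   # something is listening on the socket
        except OSError:
            time.sleep(0.1)  # target not up yet; retry
        finally:
            s.close()
    return False
```

Only after this wait succeeds can `rpc.py -s /var/tmp/spdk_tgt.sock …` calls like the ones later in the log be issued safely.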
00:05:11.088 11:57:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:11.088 11:57:18 -- common/autotest_common.sh@10 -- # set +x 00:05:11.088 [2024-07-25 11:57:18.288792] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:11.088 [2024-07-25 11:57:18.288852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1177601 ] 00:05:11.656 [2024-07-25 11:57:18.829596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.656 [2024-07-25 11:57:18.923229] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:11.656 [2024-07-25 11:57:18.923364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.915 11:57:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:11.915 11:57:19 -- common/autotest_common.sh@852 -- # return 0 00:05:11.915 11:57:19 -- json_config/json_config.sh@115 -- # echo '' 00:05:11.915 00:05:11.915 11:57:19 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:11.915 11:57:19 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:11.915 11:57:19 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:11.915 11:57:19 -- common/autotest_common.sh@10 -- # set +x 00:05:11.915 11:57:19 -- json_config/json_config.sh@148 -- # [[ 1 -eq 1 ]] 00:05:11.915 11:57:19 -- json_config/json_config.sh@149 -- # tgt_rpc dpdk_cryptodev_scan_accel_module 00:05:11.915 11:57:19 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock dpdk_cryptodev_scan_accel_module 00:05:12.174 11:57:19 -- json_config/json_config.sh@150 -- # tgt_rpc accel_assign_opc -o encrypt -m dpdk_cryptodev 00:05:12.174 11:57:19 -- json_config/json_config.sh@36 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o encrypt -m dpdk_cryptodev 00:05:12.174 [2024-07-25 11:57:19.408849] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:05:12.174 11:57:19 -- json_config/json_config.sh@151 -- # tgt_rpc accel_assign_opc -o decrypt -m dpdk_cryptodev 00:05:12.174 11:57:19 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock accel_assign_opc -o decrypt -m dpdk_cryptodev 00:05:12.433 [2024-07-25 11:57:19.577267] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:05:12.433 11:57:19 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:12.433 11:57:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:12.433 11:57:19 -- common/autotest_common.sh@10 -- # set +x 00:05:12.433 11:57:19 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:12.433 11:57:19 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:12.433 11:57:19 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:12.691 [2024-07-25 11:57:19.816862] accel_dpdk_cryptodev.c:1158:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:05:15.239 11:57:22 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:15.239 11:57:22 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:15.239 11:57:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:15.239 11:57:22 -- common/autotest_common.sh@10 -- # set +x 00:05:15.239 11:57:22 -- json_config/json_config.sh@48 -- # local ret=0 00:05:15.239 11:57:22 -- json_config/json_config.sh@49 -- # 
enabled_types=('bdev_register' 'bdev_unregister') 00:05:15.239 11:57:22 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:15.239 11:57:22 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:15.239 11:57:22 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:15.239 11:57:22 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:15.498 11:57:22 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:15.498 11:57:22 -- json_config/json_config.sh@51 -- # local get_types 00:05:15.498 11:57:22 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:15.498 11:57:22 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:15.498 11:57:22 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:15.498 11:57:22 -- common/autotest_common.sh@10 -- # set +x 00:05:15.498 11:57:22 -- json_config/json_config.sh@58 -- # return 0 00:05:15.498 11:57:22 -- json_config/json_config.sh@331 -- # [[ 1 -eq 1 ]] 00:05:15.498 11:57:22 -- json_config/json_config.sh@332 -- # create_bdev_subsystem_config 00:05:15.498 11:57:22 -- json_config/json_config.sh@158 -- # timing_enter create_bdev_subsystem_config 00:05:15.498 11:57:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:15.498 11:57:22 -- common/autotest_common.sh@10 -- # set +x 00:05:15.498 11:57:22 -- json_config/json_config.sh@160 -- # expected_notifications=() 00:05:15.498 11:57:22 -- json_config/json_config.sh@160 -- # local expected_notifications 00:05:15.498 11:57:22 -- json_config/json_config.sh@164 -- # expected_notifications+=($(get_notifications)) 00:05:15.498 11:57:22 -- json_config/json_config.sh@164 -- # get_notifications 00:05:15.498 11:57:22 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:05:15.498 11:57:22 -- 
json_config/json_config.sh@64 -- # IFS=: 00:05:15.498 11:57:22 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:05:15.498 11:57:22 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:05:15.498 11:57:22 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:05:15.498 11:57:22 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:05:15.757 11:57:22 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:05:15.757 11:57:22 -- json_config/json_config.sh@64 -- # IFS=: 00:05:15.757 11:57:22 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:05:15.757 11:57:22 -- json_config/json_config.sh@166 -- # [[ 1 -eq 1 ]] 00:05:15.757 11:57:22 -- json_config/json_config.sh@167 -- # local lvol_store_base_bdev=Nvme0n1 00:05:15.757 11:57:22 -- json_config/json_config.sh@169 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:05:15.757 11:57:22 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:05:15.757 Nvme0n1p0 Nvme0n1p1 00:05:15.757 11:57:23 -- json_config/json_config.sh@170 -- # tgt_rpc bdev_split_create Malloc0 3 00:05:15.757 11:57:23 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:05:16.016 [2024-07-25 11:57:23.208027] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:05:16.016 [2024-07-25 11:57:23.208076] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:05:16.016 00:05:16.016 11:57:23 -- json_config/json_config.sh@171 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:05:16.016 11:57:23 -- json_config/json_config.sh@36 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:05:16.274 Malloc3 00:05:16.274 11:57:23 -- json_config/json_config.sh@172 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:05:16.274 11:57:23 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:05:16.274 [2024-07-25 11:57:23.532912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:16.274 [2024-07-25 11:57:23.532956] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:16.274 [2024-07-25 11:57:23.532989] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe60670 00:05:16.274 [2024-07-25 11:57:23.532998] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:16.274 [2024-07-25 11:57:23.534201] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:16.274 [2024-07-25 11:57:23.534228] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:05:16.274 PTBdevFromMalloc3 00:05:16.274 11:57:23 -- json_config/json_config.sh@174 -- # tgt_rpc bdev_null_create Null0 32 512 00:05:16.274 11:57:23 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:05:16.532 Null0 00:05:16.532 11:57:23 -- json_config/json_config.sh@176 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:05:16.532 11:57:23 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:05:16.791 Malloc0 00:05:16.791 11:57:23 -- json_config/json_config.sh@177 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:05:16.791 11:57:23 -- 
json_config/json_config.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:05:16.791 Malloc1 00:05:16.791 11:57:24 -- json_config/json_config.sh@190 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:05:16.791 11:57:24 -- json_config/json_config.sh@193 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:05:17.050 102400+0 records in 00:05:17.050 102400+0 records out 00:05:17.050 104857600 bytes (105 MB, 100 MiB) copied, 0.20259 s, 518 MB/s 00:05:17.050 11:57:24 -- json_config/json_config.sh@194 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:05:17.050 11:57:24 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:05:17.310 aio_disk 00:05:17.310 11:57:24 -- json_config/json_config.sh@195 -- # expected_notifications+=(bdev_register:aio_disk) 00:05:17.310 11:57:24 -- json_config/json_config.sh@200 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:05:17.310 11:57:24 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:05:17.310 0134adeb-c586-4d01-bbf0-b5a58bbe4f4c 00:05:17.310 11:57:24 -- json_config/json_config.sh@207 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:05:17.310 11:57:24 -- 
json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:05:17.310 11:57:24 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:05:17.569 11:57:24 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:05:17.569 11:57:24 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:05:17.828 11:57:24 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:05:17.828 11:57:24 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:05:17.828 11:57:25 -- json_config/json_config.sh@207 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:05:17.828 11:57:25 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:05:18.087 11:57:25 -- json_config/json_config.sh@210 -- # [[ 1 -eq 1 ]] 00:05:18.087 11:57:25 -- json_config/json_config.sh@211 -- # tgt_rpc bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:05:18.087 11:57:25 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 1024 --name MallocForCryptoBdev 00:05:18.346 MallocForCryptoBdev 00:05:18.346 11:57:25 -- json_config/json_config.sh@212 -- # lspci -d:37c8 00:05:18.346 11:57:25 -- json_config/json_config.sh@212 -- # wc -l 00:05:18.346 11:57:25 -- json_config/json_config.sh@212 -- # [[ 3 -eq 0 ]] 00:05:18.346 11:57:25 -- json_config/json_config.sh@215 -- # local crypto_driver=crypto_qat 00:05:18.346 11:57:25 -- json_config/json_config.sh@218 -- # tgt_rpc 
bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:05:18.346 11:57:25 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_crypto_create MallocForCryptoBdev CryptoMallocBdev -p crypto_qat -k 01234567891234560123456789123456 00:05:18.346 [2024-07-25 11:57:25.622924] vbdev_crypto_rpc.c: 136:rpc_bdev_crypto_create: *WARNING*: "crypto_pmd" parameters is obsolete and ignored 00:05:18.346 CryptoMallocBdev 00:05:18.346 11:57:25 -- json_config/json_config.sh@222 -- # expected_notifications+=(bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev) 00:05:18.346 11:57:25 -- json_config/json_config.sh@225 -- # [[ 0 -eq 1 ]] 00:05:18.346 11:57:25 -- json_config/json_config.sh@231 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:462e7eeb-cc6f-4e57-87cb-11c7bf4a28bb bdev_register:f9271e3d-617e-45f4-8fda-87fe77286dbd bdev_register:2f3f726a-9a7a-4f74-8dcb-81ec573eda12 bdev_register:47d77386-28a8-4ff3-9c2f-f29855950e42 bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:05:18.346 11:57:25 -- json_config/json_config.sh@70 -- # local events_to_check 00:05:18.346 11:57:25 -- json_config/json_config.sh@71 -- # local recorded_events 00:05:18.346 11:57:25 -- json_config/json_config.sh@74 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:05:18.346 11:57:25 -- json_config/json_config.sh@74 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 
bdev_register:Malloc1 bdev_register:aio_disk bdev_register:462e7eeb-cc6f-4e57-87cb-11c7bf4a28bb bdev_register:f9271e3d-617e-45f4-8fda-87fe77286dbd bdev_register:2f3f726a-9a7a-4f74-8dcb-81ec573eda12 bdev_register:47d77386-28a8-4ff3-9c2f-f29855950e42 bdev_register:MallocForCryptoBdev bdev_register:CryptoMallocBdev 00:05:18.346 11:57:25 -- json_config/json_config.sh@74 -- # sort 00:05:18.346 11:57:25 -- json_config/json_config.sh@75 -- # recorded_events=($(get_notifications | sort)) 00:05:18.606 11:57:25 -- json_config/json_config.sh@75 -- # get_notifications 00:05:18.606 11:57:25 -- json_config/json_config.sh@75 -- # sort 00:05:18.606 11:57:25 -- json_config/json_config.sh@62 -- # local ev_type ev_ctx event_id 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # IFS=: 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:05:18.606 11:57:25 -- json_config/json_config.sh@61 -- # tgt_rpc notify_get_notifications -i 0 00:05:18.606 11:57:25 -- json_config/json_config.sh@61 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:05:18.606 11:57:25 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:05:18.606 11:57:25 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # IFS=: 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:05:18.606 11:57:25 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p1 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # IFS=: 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:05:18.606 11:57:25 -- json_config/json_config.sh@65 -- # echo bdev_register:Nvme0n1p0 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # IFS=: 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 
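The `tgt_check_notifications` helper above gathers the `bdev_register:*` events via `notify_get_notifications`, sorts both the expected list and the recorded list, and compares them entry by entry. The comparison amounts to the following (a minimal sketch; the bdev names here are shortened stand-ins for the UUID-named lvol bdevs in the log):

```python
# Events the test expects, accumulated while bdevs were created.
expected = [
    "bdev_register:Nvme0n1",
    "bdev_register:Malloc0",
    "bdev_register:aio_disk",
]

# Events reported back by the target, possibly in a different order.
recorded = [
    "bdev_register:aio_disk",
    "bdev_register:Nvme0n1",
    "bdev_register:Malloc0",
]

# Like the shell test's `sort` before the [[ ... != ... ]] check:
# ordering must not matter, but the two lists must match exactly.
assert sorted(expected) == sorted(recorded)
```

Sorting both sides is what lets the test tolerate notification ordering while still catching a missing or extra event.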
00:05:18.606 11:57:25 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc3 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # IFS=: 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:05:18.606 11:57:25 -- json_config/json_config.sh@65 -- # echo bdev_register:PTBdevFromMalloc3 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # IFS=: 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:05:18.606 11:57:25 -- json_config/json_config.sh@65 -- # echo bdev_register:Null0 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # IFS=: 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:05:18.606 11:57:25 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # IFS=: 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:05:18.606 11:57:25 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p2 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # IFS=: 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:05:18.606 11:57:25 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p1 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # IFS=: 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:05:18.606 11:57:25 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc0p0 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # IFS=: 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:05:18.606 11:57:25 -- json_config/json_config.sh@65 -- # echo bdev_register:Malloc1 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # IFS=: 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:05:18.606 11:57:25 -- 
json_config/json_config.sh@65 -- # echo bdev_register:aio_disk 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # IFS=: 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:05:18.606 11:57:25 -- json_config/json_config.sh@65 -- # echo bdev_register:462e7eeb-cc6f-4e57-87cb-11c7bf4a28bb 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # IFS=: 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:05:18.606 11:57:25 -- json_config/json_config.sh@65 -- # echo bdev_register:f9271e3d-617e-45f4-8fda-87fe77286dbd 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # IFS=: 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:05:18.606 11:57:25 -- json_config/json_config.sh@65 -- # echo bdev_register:2f3f726a-9a7a-4f74-8dcb-81ec573eda12 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # IFS=: 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:05:18.606 11:57:25 -- json_config/json_config.sh@65 -- # echo bdev_register:47d77386-28a8-4ff3-9c2f-f29855950e42 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # IFS=: 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:05:18.606 11:57:25 -- json_config/json_config.sh@65 -- # echo bdev_register:MallocForCryptoBdev 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # IFS=: 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:05:18.606 11:57:25 -- json_config/json_config.sh@65 -- # echo bdev_register:CryptoMallocBdev 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # IFS=: 00:05:18.606 11:57:25 -- json_config/json_config.sh@64 -- # read -r ev_type ev_ctx event_id 00:05:18.606 11:57:25 -- json_config/json_config.sh@77 -- # [[ bdev_register:2f3f726a-9a7a-4f74-8dcb-81ec573eda12 bdev_register:462e7eeb-cc6f-4e57-87cb-11c7bf4a28bb 
bdev_register:47d77386-28a8-4ff3-9c2f-f29855950e42 bdev_register:aio_disk bdev_register:CryptoMallocBdev bdev_register:f9271e3d-617e-45f4-8fda-87fe77286dbd bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\2\f\3\f\7\2\6\a\-\9\a\7\a\-\4\f\7\4\-\8\d\c\b\-\8\1\e\c\5\7\3\e\d\a\1\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\4\6\2\e\7\e\e\b\-\c\c\6\f\-\4\e\5\7\-\8\7\c\b\-\1\1\c\7\b\f\4\a\2\8\b\b\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\4\7\d\7\7\3\8\6\-\2\8\a\8\-\4\f\f\3\-\9\c\2\f\-\f\2\9\8\5\5\9\5\0\e\4\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\C\r\y\p\t\o\M\a\l\l\o\c\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\f\9\2\7\1\e\3\d\-\6\1\7\e\-\4\5\f\4\-\8\f\d\a\-\8\7\f\e\7\7\2\8\6\d\b\d\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\F\o\r\C\r\y\p\t\o\B\d\e\v\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3 ]] 00:05:18.606 11:57:25 -- json_config/json_config.sh@89 -- # cat 00:05:18.606 11:57:25 -- json_config/json_config.sh@89 -- # printf ' %s\n' bdev_register:2f3f726a-9a7a-4f74-8dcb-81ec573eda12 bdev_register:462e7eeb-cc6f-4e57-87cb-11c7bf4a28bb bdev_register:47d77386-28a8-4ff3-9c2f-f29855950e42 bdev_register:aio_disk bdev_register:CryptoMallocBdev bdev_register:f9271e3d-617e-45f4-8fda-87fe77286dbd bdev_register:Malloc0 
bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:MallocForCryptoBdev bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 00:05:18.606 Expected events matched: 00:05:18.606 bdev_register:2f3f726a-9a7a-4f74-8dcb-81ec573eda12 00:05:18.606 bdev_register:462e7eeb-cc6f-4e57-87cb-11c7bf4a28bb 00:05:18.606 bdev_register:47d77386-28a8-4ff3-9c2f-f29855950e42 00:05:18.606 bdev_register:aio_disk 00:05:18.606 bdev_register:CryptoMallocBdev 00:05:18.606 bdev_register:f9271e3d-617e-45f4-8fda-87fe77286dbd 00:05:18.606 bdev_register:Malloc0 00:05:18.606 bdev_register:Malloc0p0 00:05:18.606 bdev_register:Malloc0p1 00:05:18.606 bdev_register:Malloc0p2 00:05:18.606 bdev_register:Malloc1 00:05:18.606 bdev_register:Malloc3 00:05:18.606 bdev_register:MallocForCryptoBdev 00:05:18.606 bdev_register:Null0 00:05:18.606 bdev_register:Nvme0n1 00:05:18.606 bdev_register:Nvme0n1p0 00:05:18.606 bdev_register:Nvme0n1p1 00:05:18.606 bdev_register:PTBdevFromMalloc3 00:05:18.606 11:57:25 -- json_config/json_config.sh@233 -- # timing_exit create_bdev_subsystem_config 00:05:18.606 11:57:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:18.606 11:57:25 -- common/autotest_common.sh@10 -- # set +x 00:05:18.606 11:57:25 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:05:18.606 11:57:25 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:18.606 11:57:25 -- json_config/json_config.sh@343 -- # [[ 0 -eq 1 ]] 00:05:18.606 11:57:25 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:18.606 11:57:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:18.606 11:57:25 -- common/autotest_common.sh@10 -- # set +x 00:05:18.866 11:57:25 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:18.866 11:57:25 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name 
MallocBdevForConfigChangeCheck 00:05:18.866 11:57:25 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:18.866 MallocBdevForConfigChangeCheck 00:05:18.866 11:57:26 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:18.866 11:57:26 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:18.866 11:57:26 -- common/autotest_common.sh@10 -- # set +x 00:05:18.866 11:57:26 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:18.866 11:57:26 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:19.125 11:57:26 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:19.125 INFO: shutting down applications... 00:05:19.125 11:57:26 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:19.125 11:57:26 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:19.125 11:57:26 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:19.125 11:57:26 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:19.384 [2024-07-25 11:57:26.609767] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:05:21.289 Calling clear_iscsi_subsystem 00:05:21.289 Calling clear_nvmf_subsystem 00:05:21.289 Calling clear_nbd_subsystem 00:05:21.289 Calling clear_ublk_subsystem 00:05:21.289 Calling clear_vhost_blk_subsystem 00:05:21.289 Calling clear_vhost_scsi_subsystem 00:05:21.289 Calling clear_scheduler_subsystem 00:05:21.289 Calling clear_bdev_subsystem 00:05:21.289 Calling clear_accel_subsystem 00:05:21.289 Calling clear_vmd_subsystem 00:05:21.289 Calling clear_sock_subsystem 00:05:21.289 Calling 
clear_iobuf_subsystem 00:05:21.289 11:57:28 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py 00:05:21.289 11:57:28 -- json_config/json_config.sh@396 -- # count=100 00:05:21.289 11:57:28 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:21.289 11:57:28 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:21.289 11:57:28 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:21.289 11:57:28 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:21.289 11:57:28 -- json_config/json_config.sh@398 -- # break 00:05:21.289 11:57:28 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:21.289 11:57:28 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:21.289 11:57:28 -- json_config/json_config.sh@120 -- # local app=target 00:05:21.289 11:57:28 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:21.289 11:57:28 -- json_config/json_config.sh@124 -- # [[ -n 1177601 ]] 00:05:21.289 11:57:28 -- json_config/json_config.sh@127 -- # kill -SIGINT 1177601 00:05:21.289 11:57:28 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:05:21.289 11:57:28 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:21.289 11:57:28 -- json_config/json_config.sh@130 -- # kill -0 1177601 00:05:21.289 11:57:28 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:21.859 11:57:29 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:21.859 11:57:29 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:21.859 11:57:29 -- json_config/json_config.sh@130 -- # kill -0 1177601 00:05:21.859 11:57:29 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:21.859 
11:57:29 -- json_config/json_config.sh@132 -- # break 00:05:21.859 11:57:29 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:21.859 11:57:29 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:21.859 SPDK target shutdown done 00:05:21.859 11:57:29 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:21.859 INFO: relaunching applications... 00:05:21.859 11:57:29 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:05:21.859 11:57:29 -- json_config/json_config.sh@98 -- # local app=target 00:05:21.859 11:57:29 -- json_config/json_config.sh@99 -- # shift 00:05:21.859 11:57:29 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:21.859 11:57:29 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:21.859 11:57:29 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:21.859 11:57:29 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:21.859 11:57:29 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:21.859 11:57:29 -- json_config/json_config.sh@111 -- # app_pid[$app]=1179203 00:05:21.859 11:57:29 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:21.859 Waiting for target to run... 
00:05:21.859 11:57:29 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:05:21.859 11:57:29 -- json_config/json_config.sh@114 -- # waitforlisten 1179203 /var/tmp/spdk_tgt.sock 00:05:21.859 11:57:29 -- common/autotest_common.sh@819 -- # '[' -z 1179203 ']' 00:05:21.859 11:57:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:21.859 11:57:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:21.859 11:57:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:21.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:21.859 11:57:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:21.859 11:57:29 -- common/autotest_common.sh@10 -- # set +x 00:05:21.859 [2024-07-25 11:57:29.065729] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:21.859 [2024-07-25 11:57:29.065793] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1179203 ] 00:05:22.427 [2024-07-25 11:57:29.621455] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.427 [2024-07-25 11:57:29.705449] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:22.427 [2024-07-25 11:57:29.705561] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.687 [2024-07-25 11:57:29.750984] accel_dpdk_cryptodev.c: 218:accel_dpdk_cryptodev_set_driver: *NOTICE*: Using driver crypto_aesni_mb 00:05:22.687 [2024-07-25 11:57:29.759014] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation encrypt will be assigned to module dpdk_cryptodev 00:05:22.687 [2024-07-25 11:57:29.767030] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation decrypt will be assigned to module dpdk_cryptodev 00:05:22.687 [2024-07-25 11:57:29.846259] accel_dpdk_cryptodev.c:1158:accel_dpdk_cryptodev_init: *NOTICE*: Found crypto devices: 97 00:05:25.221 [2024-07-25 11:57:32.159771] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:25.221 [2024-07-25 11:57:32.159823] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:05:25.221 [2024-07-25 11:57:32.159833] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:05:25.221 [2024-07-25 11:57:32.167791] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:05:25.221 [2024-07-25 11:57:32.167810] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:05:25.221 [2024-07-25 11:57:32.175801] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 
00:05:25.221 [2024-07-25 11:57:32.175816] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:05:25.221 [2024-07-25 11:57:32.183829] vbdev_crypto_rpc.c: 115:rpc_bdev_crypto_create: *NOTICE*: Found key "CryptoMallocBdev_AES_CBC" 00:05:25.221 [2024-07-25 11:57:32.183847] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: MallocForCryptoBdev 00:05:25.221 [2024-07-25 11:57:32.183855] vbdev_crypto.c: 618:create_crypto_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:05:25.221 [2024-07-25 11:57:32.530300] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:25.221 [2024-07-25 11:57:32.530335] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:25.221 [2024-07-25 11:57:32.530348] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1612350 00:05:25.221 [2024-07-25 11:57:32.530357] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:25.221 [2024-07-25 11:57:32.530570] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:25.221 [2024-07-25 11:57:32.530583] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:05:26.168 11:57:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:26.168 11:57:33 -- common/autotest_common.sh@852 -- # return 0 00:05:26.168 11:57:33 -- json_config/json_config.sh@115 -- # echo '' 00:05:26.168 00:05:26.168 11:57:33 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:26.168 11:57:33 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:26.168 INFO: Checking if target configuration is the same... 
00:05:26.168 11:57:33 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:05:26.168 11:57:33 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:26.168 11:57:33 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:26.168 + '[' 2 -ne 2 ']' 00:05:26.168 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:26.168 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../.. 00:05:26.168 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:05:26.168 +++ basename /dev/fd/62 00:05:26.168 ++ mktemp /tmp/62.XXX 00:05:26.168 + tmp_file_1=/tmp/62.vX3 00:05:26.168 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:05:26.168 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:26.168 + tmp_file_2=/tmp/spdk_tgt_config.json.O2i 00:05:26.168 + ret=0 00:05:26.168 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:26.168 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:26.436 + diff -u /tmp/62.vX3 /tmp/spdk_tgt_config.json.O2i 00:05:26.436 + echo 'INFO: JSON config files are the same' 00:05:26.436 INFO: JSON config files are the same 00:05:26.436 + rm /tmp/62.vX3 /tmp/spdk_tgt_config.json.O2i 00:05:26.436 + exit 0 00:05:26.436 11:57:33 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:26.436 11:57:33 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:26.436 INFO: changing configuration and checking if this can be detected... 
00:05:26.436 11:57:33 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:26.436 11:57:33 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:26.436 11:57:33 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:05:26.436 11:57:33 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:26.436 11:57:33 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:26.436 + '[' 2 -ne 2 ']' 00:05:26.436 +++ dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:26.436 ++ readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/../.. 
00:05:26.436 + rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:05:26.436 +++ basename /dev/fd/62 00:05:26.436 ++ mktemp /tmp/62.XXX 00:05:26.436 + tmp_file_1=/tmp/62.BLG 00:05:26.436 +++ basename /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:05:26.436 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:26.436 + tmp_file_2=/tmp/spdk_tgt_config.json.yBU 00:05:26.436 + ret=0 00:05:26.436 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:26.696 + /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:26.954 + diff -u /tmp/62.BLG /tmp/spdk_tgt_config.json.yBU 00:05:26.954 + ret=1 00:05:26.955 + echo '=== Start of file: /tmp/62.BLG ===' 00:05:26.955 + cat /tmp/62.BLG 00:05:26.955 + echo '=== End of file: /tmp/62.BLG ===' 00:05:26.955 + echo '' 00:05:26.955 + echo '=== Start of file: /tmp/spdk_tgt_config.json.yBU ===' 00:05:26.955 + cat /tmp/spdk_tgt_config.json.yBU 00:05:26.955 + echo '=== End of file: /tmp/spdk_tgt_config.json.yBU ===' 00:05:26.955 + echo '' 00:05:26.955 + rm /tmp/62.BLG /tmp/spdk_tgt_config.json.yBU 00:05:26.955 + exit 1 00:05:26.955 11:57:34 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:05:26.955 INFO: configuration change detected. 
00:05:26.955 11:57:34 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:26.955 11:57:34 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:26.955 11:57:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:26.955 11:57:34 -- common/autotest_common.sh@10 -- # set +x 00:05:26.955 11:57:34 -- json_config/json_config.sh@360 -- # local ret=0 00:05:26.955 11:57:34 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:26.955 11:57:34 -- json_config/json_config.sh@370 -- # [[ -n 1179203 ]] 00:05:26.955 11:57:34 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:26.955 11:57:34 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:26.955 11:57:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:26.955 11:57:34 -- common/autotest_common.sh@10 -- # set +x 00:05:26.955 11:57:34 -- json_config/json_config.sh@239 -- # [[ 1 -eq 1 ]] 00:05:26.955 11:57:34 -- json_config/json_config.sh@240 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:05:26.955 11:57:34 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:05:26.955 11:57:34 -- json_config/json_config.sh@241 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:05:26.955 11:57:34 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:05:27.214 11:57:34 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:05:27.214 11:57:34 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 00:05:27.473 11:57:34 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:05:27.473 11:57:34 -- json_config/json_config.sh@36 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:05:27.473 11:57:34 -- json_config/json_config.sh@246 -- # uname -s 00:05:27.473 11:57:34 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:27.473 11:57:34 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:27.473 11:57:34 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:27.473 11:57:34 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:27.473 11:57:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:27.473 11:57:34 -- common/autotest_common.sh@10 -- # set +x 00:05:27.473 11:57:34 -- json_config/json_config.sh@376 -- # killprocess 1179203 00:05:27.473 11:57:34 -- common/autotest_common.sh@926 -- # '[' -z 1179203 ']' 00:05:27.473 11:57:34 -- common/autotest_common.sh@930 -- # kill -0 1179203 00:05:27.473 11:57:34 -- common/autotest_common.sh@931 -- # uname 00:05:27.473 11:57:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:27.473 11:57:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1179203 00:05:27.732 11:57:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:27.732 11:57:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:27.732 11:57:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1179203' 00:05:27.732 killing process with pid 1179203 00:05:27.732 11:57:34 -- common/autotest_common.sh@945 -- # kill 1179203 00:05:27.732 11:57:34 -- common/autotest_common.sh@950 -- # wait 1179203 00:05:29.635 11:57:36 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/crypto-phy-autotest/spdk/spdk_tgt_config.json 00:05:29.635 11:57:36 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:29.635 11:57:36 -- common/autotest_common.sh@718 -- # xtrace_disable 
00:05:29.635 11:57:36 -- common/autotest_common.sh@10 -- # set +x 00:05:29.635 11:57:36 -- json_config/json_config.sh@381 -- # return 0 00:05:29.635 11:57:36 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:29.635 INFO: Success 00:05:29.635 00:05:29.635 real 0m18.630s 00:05:29.635 user 0m22.194s 00:05:29.635 sys 0m3.892s 00:05:29.635 11:57:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.635 11:57:36 -- common/autotest_common.sh@10 -- # set +x 00:05:29.635 ************************************ 00:05:29.635 END TEST json_config 00:05:29.635 ************************************ 00:05:29.635 11:57:36 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:29.635 11:57:36 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:29.635 11:57:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:29.635 11:57:36 -- common/autotest_common.sh@10 -- # set +x 00:05:29.635 ************************************ 00:05:29.635 START TEST json_config_extra_key 00:05:29.635 ************************************ 00:05:29.635 11:57:36 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:29.635 11:57:36 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:05:29.635 11:57:36 -- nvmf/common.sh@7 -- # uname -s 00:05:29.635 11:57:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:29.635 11:57:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:29.635 11:57:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:29.635 11:57:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:29.635 11:57:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:29.635 11:57:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:29.635 11:57:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:05:29.635 11:57:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:29.635 11:57:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:29.635 11:57:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:29.635 11:57:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d40ca9-2a78-e711-906e-0017a4403562 00:05:29.635 11:57:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d40ca9-2a78-e711-906e-0017a4403562 00:05:29.635 11:57:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:29.635 11:57:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:29.635 11:57:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:29.635 11:57:36 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:05:29.635 11:57:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:29.635 11:57:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:29.635 11:57:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:29.635 11:57:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.635 11:57:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.635 11:57:36 -- paths/export.sh@4 -- 
# PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.635 11:57:36 -- paths/export.sh@5 -- # export PATH 00:05:29.636 11:57:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:29.636 11:57:36 -- nvmf/common.sh@46 -- # : 0 00:05:29.636 11:57:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:29.636 11:57:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:29.636 11:57:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:29.636 11:57:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:29.636 11:57:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:29.636 11:57:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:29.636 11:57:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:29.636 11:57:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:29.636 11:57:36 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:29.636 11:57:36 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:29.636 11:57:36 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:29.636 11:57:36 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:29.636 11:57:36 -- 
json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:29.636 11:57:36 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:29.636 11:57:36 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:29.636 11:57:36 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:29.636 11:57:36 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:29.636 11:57:36 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:29.636 INFO: launching applications... 00:05:29.636 11:57:36 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json 00:05:29.636 11:57:36 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:29.636 11:57:36 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:29.636 11:57:36 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:29.636 11:57:36 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:29.636 11:57:36 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=1180398 00:05:29.636 11:57:36 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:05:29.636 Waiting for target to run... 
00:05:29.636 11:57:36 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 1180398 /var/tmp/spdk_tgt.sock 00:05:29.636 11:57:36 -- common/autotest_common.sh@819 -- # '[' -z 1180398 ']' 00:05:29.636 11:57:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:29.636 11:57:36 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/extra_key.json 00:05:29.636 11:57:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:29.636 11:57:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:29.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:29.636 11:57:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:29.636 11:57:36 -- common/autotest_common.sh@10 -- # set +x 00:05:29.896 [2024-07-25 11:57:36.957968] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:29.896 [2024-07-25 11:57:36.958032] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1180398 ] 00:05:30.488 [2024-07-25 11:57:37.501690] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.488 [2024-07-25 11:57:37.598174] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:30.488 [2024-07-25 11:57:37.598310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.488 11:57:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:30.488 11:57:37 -- common/autotest_common.sh@852 -- # return 0 00:05:30.488 11:57:37 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:30.488 00:05:30.488 11:57:37 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:05:30.488 INFO: shutting down applications... 
00:05:30.488 11:57:37 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:30.488 11:57:37 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:30.488 11:57:37 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:30.488 11:57:37 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 1180398 ]] 00:05:30.488 11:57:37 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 1180398 00:05:30.488 11:57:37 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:30.488 11:57:37 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:30.488 11:57:37 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1180398 00:05:30.488 11:57:37 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:31.056 11:57:38 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:31.056 11:57:38 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:31.056 11:57:38 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1180398 00:05:31.056 11:57:38 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:31.056 11:57:38 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:31.056 11:57:38 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:31.056 11:57:38 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:31.056 SPDK target shutdown done 00:05:31.056 11:57:38 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:31.056 Success 00:05:31.056 00:05:31.057 real 0m1.457s 00:05:31.057 user 0m0.835s 00:05:31.057 sys 0m0.672s 00:05:31.057 11:57:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:31.057 11:57:38 -- common/autotest_common.sh@10 -- # set +x 00:05:31.057 ************************************ 00:05:31.057 END TEST json_config_extra_key 00:05:31.057 ************************************ 00:05:31.057 11:57:38 -- spdk/autotest.sh@180 -- # run_test alias_rpc 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:31.057 11:57:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:31.057 11:57:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:31.057 11:57:38 -- common/autotest_common.sh@10 -- # set +x 00:05:31.057 ************************************ 00:05:31.057 START TEST alias_rpc 00:05:31.057 ************************************ 00:05:31.057 11:57:38 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:31.316 * Looking for test storage... 00:05:31.316 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/alias_rpc 00:05:31.316 11:57:38 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:31.316 11:57:38 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1180630 00:05:31.316 11:57:38 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1180630 00:05:31.316 11:57:38 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:31.316 11:57:38 -- common/autotest_common.sh@819 -- # '[' -z 1180630 ']' 00:05:31.316 11:57:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.316 11:57:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:31.316 11:57:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.316 11:57:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:31.316 11:57:38 -- common/autotest_common.sh@10 -- # set +x 00:05:31.316 [2024-07-25 11:57:38.446658] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:31.316 [2024-07-25 11:57:38.446722] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1180630 ] 00:05:31.316 [2024-07-25 11:57:38.532484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.316 [2024-07-25 11:57:38.611997] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:31.316 [2024-07-25 11:57:38.612144] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.255 11:57:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:32.255 11:57:39 -- common/autotest_common.sh@852 -- # return 0 00:05:32.255 11:57:39 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:32.255 11:57:39 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1180630 00:05:32.255 11:57:39 -- common/autotest_common.sh@926 -- # '[' -z 1180630 ']' 00:05:32.255 11:57:39 -- common/autotest_common.sh@930 -- # kill -0 1180630 00:05:32.255 11:57:39 -- common/autotest_common.sh@931 -- # uname 00:05:32.255 11:57:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:32.255 11:57:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1180630 00:05:32.255 11:57:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:32.255 11:57:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:32.255 11:57:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1180630' 00:05:32.255 killing process with pid 1180630 00:05:32.255 11:57:39 -- common/autotest_common.sh@945 -- # kill 1180630 00:05:32.255 11:57:39 -- common/autotest_common.sh@950 -- # wait 1180630 00:05:32.825 00:05:32.825 real 0m1.551s 00:05:32.825 user 0m1.589s 00:05:32.825 sys 0m0.489s 00:05:32.825 11:57:39 -- common/autotest_common.sh@1105 -- # 
xtrace_disable 00:05:32.825 11:57:39 -- common/autotest_common.sh@10 -- # set +x 00:05:32.825 ************************************ 00:05:32.825 END TEST alias_rpc 00:05:32.825 ************************************ 00:05:32.825 11:57:39 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:05:32.825 11:57:39 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:32.825 11:57:39 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:32.825 11:57:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:32.825 11:57:39 -- common/autotest_common.sh@10 -- # set +x 00:05:32.825 ************************************ 00:05:32.825 START TEST spdkcli_tcp 00:05:32.825 ************************************ 00:05:32.825 11:57:39 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:32.825 * Looking for test storage... 00:05:32.825 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli 00:05:32.825 11:57:39 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/common.sh 00:05:32.825 11:57:39 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:32.825 11:57:39 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/json_config/clear_config.py 00:05:32.825 11:57:39 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:32.825 11:57:39 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:32.825 11:57:39 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:32.825 11:57:39 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:32.825 11:57:39 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:32.825 11:57:39 -- common/autotest_common.sh@10 -- # set +x 00:05:32.825 11:57:39 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1180863 00:05:32.825 11:57:39 -- 
spdkcli/tcp.sh@27 -- # waitforlisten 1180863 00:05:32.825 11:57:39 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:32.825 11:57:39 -- common/autotest_common.sh@819 -- # '[' -z 1180863 ']' 00:05:32.825 11:57:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.825 11:57:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:32.825 11:57:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:32.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.825 11:57:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:32.825 11:57:39 -- common/autotest_common.sh@10 -- # set +x 00:05:32.825 [2024-07-25 11:57:40.052348] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:05:32.825 [2024-07-25 11:57:40.052406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1180863 ] 00:05:33.084 [2024-07-25 11:57:40.141638] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:33.084 [2024-07-25 11:57:40.231092] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:33.084 [2024-07-25 11:57:40.231301] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.084 [2024-07-25 11:57:40.231303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.651 11:57:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:33.651 11:57:40 -- common/autotest_common.sh@852 -- # return 0 00:05:33.651 11:57:40 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:33.651 11:57:40 -- spdkcli/tcp.sh@31 -- # socat_pid=1181036 00:05:33.651 
11:57:40 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:33.912 [ 00:05:33.912 "bdev_malloc_delete", 00:05:33.912 "bdev_malloc_create", 00:05:33.912 "bdev_null_resize", 00:05:33.912 "bdev_null_delete", 00:05:33.912 "bdev_null_create", 00:05:33.912 "bdev_nvme_cuse_unregister", 00:05:33.912 "bdev_nvme_cuse_register", 00:05:33.912 "bdev_opal_new_user", 00:05:33.912 "bdev_opal_set_lock_state", 00:05:33.912 "bdev_opal_delete", 00:05:33.912 "bdev_opal_get_info", 00:05:33.912 "bdev_opal_create", 00:05:33.912 "bdev_nvme_opal_revert", 00:05:33.912 "bdev_nvme_opal_init", 00:05:33.912 "bdev_nvme_send_cmd", 00:05:33.912 "bdev_nvme_get_path_iostat", 00:05:33.912 "bdev_nvme_get_mdns_discovery_info", 00:05:33.912 "bdev_nvme_stop_mdns_discovery", 00:05:33.912 "bdev_nvme_start_mdns_discovery", 00:05:33.912 "bdev_nvme_set_multipath_policy", 00:05:33.912 "bdev_nvme_set_preferred_path", 00:05:33.912 "bdev_nvme_get_io_paths", 00:05:33.912 "bdev_nvme_remove_error_injection", 00:05:33.912 "bdev_nvme_add_error_injection", 00:05:33.912 "bdev_nvme_get_discovery_info", 00:05:33.912 "bdev_nvme_stop_discovery", 00:05:33.912 "bdev_nvme_start_discovery", 00:05:33.912 "bdev_nvme_get_controller_health_info", 00:05:33.912 "bdev_nvme_disable_controller", 00:05:33.912 "bdev_nvme_enable_controller", 00:05:33.912 "bdev_nvme_reset_controller", 00:05:33.912 "bdev_nvme_get_transport_statistics", 00:05:33.912 "bdev_nvme_apply_firmware", 00:05:33.912 "bdev_nvme_detach_controller", 00:05:33.912 "bdev_nvme_get_controllers", 00:05:33.912 "bdev_nvme_attach_controller", 00:05:33.912 "bdev_nvme_set_hotplug", 00:05:33.912 "bdev_nvme_set_options", 00:05:33.912 "bdev_passthru_delete", 00:05:33.912 "bdev_passthru_create", 00:05:33.912 "bdev_lvol_grow_lvstore", 00:05:33.912 "bdev_lvol_get_lvols", 00:05:33.912 "bdev_lvol_get_lvstores", 00:05:33.912 "bdev_lvol_delete", 00:05:33.912 "bdev_lvol_set_read_only", 00:05:33.912 
"bdev_lvol_resize", 00:05:33.912 "bdev_lvol_decouple_parent", 00:05:33.912 "bdev_lvol_inflate", 00:05:33.912 "bdev_lvol_rename", 00:05:33.912 "bdev_lvol_clone_bdev", 00:05:33.912 "bdev_lvol_clone", 00:05:33.912 "bdev_lvol_snapshot", 00:05:33.912 "bdev_lvol_create", 00:05:33.912 "bdev_lvol_delete_lvstore", 00:05:33.912 "bdev_lvol_rename_lvstore", 00:05:33.912 "bdev_lvol_create_lvstore", 00:05:33.912 "bdev_raid_set_options", 00:05:33.912 "bdev_raid_remove_base_bdev", 00:05:33.912 "bdev_raid_add_base_bdev", 00:05:33.912 "bdev_raid_delete", 00:05:33.912 "bdev_raid_create", 00:05:33.912 "bdev_raid_get_bdevs", 00:05:33.912 "bdev_error_inject_error", 00:05:33.912 "bdev_error_delete", 00:05:33.912 "bdev_error_create", 00:05:33.912 "bdev_split_delete", 00:05:33.912 "bdev_split_create", 00:05:33.912 "bdev_delay_delete", 00:05:33.912 "bdev_delay_create", 00:05:33.912 "bdev_delay_update_latency", 00:05:33.912 "bdev_zone_block_delete", 00:05:33.912 "bdev_zone_block_create", 00:05:33.912 "blobfs_create", 00:05:33.912 "blobfs_detect", 00:05:33.912 "blobfs_set_cache_size", 00:05:33.912 "bdev_crypto_delete", 00:05:33.912 "bdev_crypto_create", 00:05:33.912 "bdev_compress_delete", 00:05:33.912 "bdev_compress_create", 00:05:33.912 "bdev_compress_get_orphans", 00:05:33.912 "bdev_aio_delete", 00:05:33.912 "bdev_aio_rescan", 00:05:33.912 "bdev_aio_create", 00:05:33.912 "bdev_ftl_set_property", 00:05:33.912 "bdev_ftl_get_properties", 00:05:33.913 "bdev_ftl_get_stats", 00:05:33.913 "bdev_ftl_unmap", 00:05:33.913 "bdev_ftl_unload", 00:05:33.913 "bdev_ftl_delete", 00:05:33.913 "bdev_ftl_load", 00:05:33.913 "bdev_ftl_create", 00:05:33.913 "bdev_virtio_attach_controller", 00:05:33.913 "bdev_virtio_scsi_get_devices", 00:05:33.913 "bdev_virtio_detach_controller", 00:05:33.913 "bdev_virtio_blk_set_hotplug", 00:05:33.913 "bdev_iscsi_delete", 00:05:33.913 "bdev_iscsi_create", 00:05:33.913 "bdev_iscsi_set_options", 00:05:33.913 "accel_error_inject_error", 00:05:33.913 "ioat_scan_accel_module", 
00:05:33.913 "dsa_scan_accel_module", 00:05:33.913 "iaa_scan_accel_module", 00:05:33.913 "dpdk_cryptodev_get_driver", 00:05:33.913 "dpdk_cryptodev_set_driver", 00:05:33.913 "dpdk_cryptodev_scan_accel_module", 00:05:33.913 "compressdev_scan_accel_module", 00:05:33.913 "iscsi_set_options", 00:05:33.913 "iscsi_get_auth_groups", 00:05:33.913 "iscsi_auth_group_remove_secret", 00:05:33.913 "iscsi_auth_group_add_secret", 00:05:33.913 "iscsi_delete_auth_group", 00:05:33.913 "iscsi_create_auth_group", 00:05:33.913 "iscsi_set_discovery_auth", 00:05:33.913 "iscsi_get_options", 00:05:33.913 "iscsi_target_node_request_logout", 00:05:33.913 "iscsi_target_node_set_redirect", 00:05:33.913 "iscsi_target_node_set_auth", 00:05:33.913 "iscsi_target_node_add_lun", 00:05:33.913 "iscsi_get_connections", 00:05:33.913 "iscsi_portal_group_set_auth", 00:05:33.913 "iscsi_start_portal_group", 00:05:33.913 "iscsi_delete_portal_group", 00:05:33.913 "iscsi_create_portal_group", 00:05:33.913 "iscsi_get_portal_groups", 00:05:33.913 "iscsi_delete_target_node", 00:05:33.913 "iscsi_target_node_remove_pg_ig_maps", 00:05:33.913 "iscsi_target_node_add_pg_ig_maps", 00:05:33.913 "iscsi_create_target_node", 00:05:33.913 "iscsi_get_target_nodes", 00:05:33.913 "iscsi_delete_initiator_group", 00:05:33.913 "iscsi_initiator_group_remove_initiators", 00:05:33.913 "iscsi_initiator_group_add_initiators", 00:05:33.913 "iscsi_create_initiator_group", 00:05:33.913 "iscsi_get_initiator_groups", 00:05:33.913 "nvmf_set_crdt", 00:05:33.913 "nvmf_set_config", 00:05:33.913 "nvmf_set_max_subsystems", 00:05:33.913 "nvmf_subsystem_get_listeners", 00:05:33.913 "nvmf_subsystem_get_qpairs", 00:05:33.913 "nvmf_subsystem_get_controllers", 00:05:33.913 "nvmf_get_stats", 00:05:33.913 "nvmf_get_transports", 00:05:33.913 "nvmf_create_transport", 00:05:33.913 "nvmf_get_targets", 00:05:33.913 "nvmf_delete_target", 00:05:33.913 "nvmf_create_target", 00:05:33.913 "nvmf_subsystem_allow_any_host", 00:05:33.913 "nvmf_subsystem_remove_host", 
00:05:33.913 "nvmf_subsystem_add_host", 00:05:33.913 "nvmf_subsystem_remove_ns", 00:05:33.913 "nvmf_subsystem_add_ns", 00:05:33.913 "nvmf_subsystem_listener_set_ana_state", 00:05:33.913 "nvmf_discovery_get_referrals", 00:05:33.913 "nvmf_discovery_remove_referral", 00:05:33.913 "nvmf_discovery_add_referral", 00:05:33.913 "nvmf_subsystem_remove_listener", 00:05:33.913 "nvmf_subsystem_add_listener", 00:05:33.913 "nvmf_delete_subsystem", 00:05:33.913 "nvmf_create_subsystem", 00:05:33.913 "nvmf_get_subsystems", 00:05:33.913 "env_dpdk_get_mem_stats", 00:05:33.913 "nbd_get_disks", 00:05:33.913 "nbd_stop_disk", 00:05:33.913 "nbd_start_disk", 00:05:33.913 "ublk_recover_disk", 00:05:33.913 "ublk_get_disks", 00:05:33.913 "ublk_stop_disk", 00:05:33.913 "ublk_start_disk", 00:05:33.913 "ublk_destroy_target", 00:05:33.913 "ublk_create_target", 00:05:33.913 "virtio_blk_create_transport", 00:05:33.913 "virtio_blk_get_transports", 00:05:33.913 "vhost_controller_set_coalescing", 00:05:33.913 "vhost_get_controllers", 00:05:33.913 "vhost_delete_controller", 00:05:33.913 "vhost_create_blk_controller", 00:05:33.913 "vhost_scsi_controller_remove_target", 00:05:33.913 "vhost_scsi_controller_add_target", 00:05:33.913 "vhost_start_scsi_controller", 00:05:33.913 "vhost_create_scsi_controller", 00:05:33.913 "thread_set_cpumask", 00:05:33.913 "framework_get_scheduler", 00:05:33.913 "framework_set_scheduler", 00:05:33.913 "framework_get_reactors", 00:05:33.913 "thread_get_io_channels", 00:05:33.913 "thread_get_pollers", 00:05:33.913 "thread_get_stats", 00:05:33.913 "framework_monitor_context_switch", 00:05:33.913 "spdk_kill_instance", 00:05:33.913 "log_enable_timestamps", 00:05:33.913 "log_get_flags", 00:05:33.913 "log_clear_flag", 00:05:33.913 "log_set_flag", 00:05:33.913 "log_get_level", 00:05:33.913 "log_set_level", 00:05:33.913 "log_get_print_level", 00:05:33.913 "log_set_print_level", 00:05:33.913 "framework_enable_cpumask_locks", 00:05:33.913 "framework_disable_cpumask_locks", 00:05:33.913 
"framework_wait_init", 00:05:33.913 "framework_start_init", 00:05:33.913 "scsi_get_devices", 00:05:33.913 "bdev_get_histogram", 00:05:33.913 "bdev_enable_histogram", 00:05:33.913 "bdev_set_qos_limit", 00:05:33.913 "bdev_set_qd_sampling_period", 00:05:33.913 "bdev_get_bdevs", 00:05:33.913 "bdev_reset_iostat", 00:05:33.913 "bdev_get_iostat", 00:05:33.913 "bdev_examine", 00:05:33.913 "bdev_wait_for_examine", 00:05:33.913 "bdev_set_options", 00:05:33.913 "notify_get_notifications", 00:05:33.913 "notify_get_types", 00:05:33.913 "accel_get_stats", 00:05:33.913 "accel_set_options", 00:05:33.913 "accel_set_driver", 00:05:33.913 "accel_crypto_key_destroy", 00:05:33.913 "accel_crypto_keys_get", 00:05:33.913 "accel_crypto_key_create", 00:05:33.913 "accel_assign_opc", 00:05:33.913 "accel_get_module_info", 00:05:33.913 "accel_get_opc_assignments", 00:05:33.913 "vmd_rescan", 00:05:33.913 "vmd_remove_device", 00:05:33.913 "vmd_enable", 00:05:33.913 "sock_set_default_impl", 00:05:33.913 "sock_impl_set_options", 00:05:33.913 "sock_impl_get_options", 00:05:33.913 "iobuf_get_stats", 00:05:33.913 "iobuf_set_options", 00:05:33.913 "framework_get_pci_devices", 00:05:33.913 "framework_get_config", 00:05:33.913 "framework_get_subsystems", 00:05:33.913 "trace_get_info", 00:05:33.913 "trace_get_tpoint_group_mask", 00:05:33.913 "trace_disable_tpoint_group", 00:05:33.913 "trace_enable_tpoint_group", 00:05:33.913 "trace_clear_tpoint_mask", 00:05:33.913 "trace_set_tpoint_mask", 00:05:33.913 "spdk_get_version", 00:05:33.913 "rpc_get_methods" 00:05:33.913 ] 00:05:33.913 11:57:41 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:33.913 11:57:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:33.913 11:57:41 -- common/autotest_common.sh@10 -- # set +x 00:05:33.913 11:57:41 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:33.913 11:57:41 -- spdkcli/tcp.sh@38 -- # killprocess 1180863 00:05:33.913 11:57:41 -- common/autotest_common.sh@926 -- # '[' -z 1180863 ']' 
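In the trace above, `socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock` bridges the target's UNIX RPC socket to 127.0.0.1:9998 so `rpc.py -s 127.0.0.1 -p 9998 rpc_get_methods` can query it over TCP, producing the method array just listed. A quick way to assert that a given method is present in such output is a grep helper like this (`has_rpc_method` is a hypothetical name, not part of SPDK):

```shell
# Hypothetical helper: check whether a method name appears in rpc_get_methods
# output, which is a JSON array of quoted strings as shown in the trace above.
has_rpc_method() {
    # the name must appear as a quoted JSON string, not as a substring
    printf '%s' "$2" | grep -q "\"$1\""
}

# Abbreviated stand-in for the full rpc_get_methods array above.
methods='[ "spdk_get_version", "rpc_get_methods" ]'
has_rpc_method rpc_get_methods "$methods" && echo "rpc_get_methods available"
```

Quoting the name in the pattern keeps a query for, say, `bdev_lvol_clone` from matching `bdev_lvol_clone_bdev`.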
00:05:33.913 11:57:41 -- common/autotest_common.sh@930 -- # kill -0 1180863 00:05:33.913 11:57:41 -- common/autotest_common.sh@931 -- # uname 00:05:33.913 11:57:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:33.913 11:57:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1180863 00:05:33.913 11:57:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:33.913 11:57:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:33.913 11:57:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1180863' 00:05:33.913 killing process with pid 1180863 00:05:33.913 11:57:41 -- common/autotest_common.sh@945 -- # kill 1180863 00:05:33.913 11:57:41 -- common/autotest_common.sh@950 -- # wait 1180863 00:05:34.200 00:05:34.200 real 0m1.587s 00:05:34.200 user 0m2.747s 00:05:34.200 sys 0m0.560s 00:05:34.200 11:57:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.200 11:57:41 -- common/autotest_common.sh@10 -- # set +x 00:05:34.200 ************************************ 00:05:34.200 END TEST spdkcli_tcp 00:05:34.200 ************************************ 00:05:34.459 11:57:41 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:34.459 11:57:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:34.459 11:57:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:34.459 11:57:41 -- common/autotest_common.sh@10 -- # set +x 00:05:34.459 ************************************ 00:05:34.459 START TEST dpdk_mem_utility 00:05:34.459 ************************************ 00:05:34.459 11:57:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:34.459 * Looking for test storage... 
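The `killprocess` calls traced above follow a visible pattern: verify the PID with `kill -0`, check `uname`, read the process name via `ps --no-headers -o comm=`, refuse to signal a `sudo` wrapper, then kill and wait. A rough reconstruction from the trace, assuming the real autotest_common.sh helper has additional branches not exercised here:

```shell
# Rough sketch of the killprocess helper, reconstructed from the traced
# commands above; the real autotest_common.sh version handles more cases
# (e.g. targets launched under sudo).
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" 2>/dev/null || return 1        # nothing to kill
    if [ "$(uname)" = Linux ]; then
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")
        [ "$process_name" = sudo ] && return 1    # never signal the sudo wrapper itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true               # reap it if it is our child
}
```

The `sudo` comparison explains the `process_name=reactor_0` lines in the log: the helper only proceeds with the SIGTERM once it knows the PID is the reactor itself rather than a privilege wrapper.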
00:05:34.459 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/dpdk_memory_utility 00:05:34.459 11:57:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:34.459 11:57:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1181119 00:05:34.459 11:57:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt 00:05:34.459 11:57:41 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1181119 00:05:34.459 11:57:41 -- common/autotest_common.sh@819 -- # '[' -z 1181119 ']' 00:05:34.459 11:57:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.459 11:57:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:34.459 11:57:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.459 11:57:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:34.459 11:57:41 -- common/autotest_common.sh@10 -- # set +x 00:05:34.459 [2024-07-25 11:57:41.693433] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:05:34.459 [2024-07-25 11:57:41.693501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1181119 ] 00:05:34.718 [2024-07-25 11:57:41.783374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.718 [2024-07-25 11:57:41.868152] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:34.718 [2024-07-25 11:57:41.868299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.287 11:57:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:35.287 11:57:42 -- common/autotest_common.sh@852 -- # return 0 00:05:35.287 11:57:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:35.287 11:57:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:35.287 11:57:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:35.287 11:57:42 -- common/autotest_common.sh@10 -- # set +x 00:05:35.287 { 00:05:35.287 "filename": "/tmp/spdk_mem_dump.txt" 00:05:35.287 } 00:05:35.287 11:57:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:35.287 11:57:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:35.287 DPDK memory size 816.000000 MiB in 2 heap(s) 00:05:35.287 2 heaps totaling size 816.000000 MiB 00:05:35.287 size: 814.000000 MiB heap id: 0 00:05:35.287 size: 2.000000 MiB heap id: 1 00:05:35.287 end heaps---------- 00:05:35.287 8 mempools totaling size 598.116089 MiB 00:05:35.287 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:35.287 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:35.287 size: 84.521057 MiB name: bdev_io_1181119 00:05:35.287 size: 51.011292 MiB name: evtpool_1181119 00:05:35.287 size: 50.003479 MiB name: 
msgpool_1181119 00:05:35.287 size: 21.763794 MiB name: PDU_Pool 00:05:35.287 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:35.287 size: 0.026123 MiB name: Session_Pool 00:05:35.287 end mempools------- 00:05:35.287 201 memzones totaling size 4.173523 MiB 00:05:35.287 size: 1.000366 MiB name: RG_ring_0_1181119 00:05:35.287 size: 1.000366 MiB name: RG_ring_1_1181119 00:05:35.287 size: 1.000366 MiB name: RG_ring_4_1181119 00:05:35.287 size: 1.000366 MiB name: RG_ring_5_1181119 00:05:35.287 size: 0.125366 MiB name: RG_ring_2_1181119 00:05:35.287 size: 0.015991 MiB name: RG_ring_3_1181119 00:05:35.287 size: 0.001160 MiB name: QAT_SYM_CAPA_GEN_1 00:05:35.287 size: 0.000244 MiB name: 0000:3d:01.0_qat 00:05:35.287 size: 0.000244 MiB name: 0000:3d:01.1_qat 00:05:35.287 size: 0.000244 MiB name: 0000:3d:01.2_qat 00:05:35.287 size: 0.000244 MiB name: 0000:3d:01.3_qat 00:05:35.287 size: 0.000244 MiB name: 0000:3d:01.4_qat 00:05:35.287 size: 0.000244 MiB name: 0000:3d:01.5_qat 00:05:35.287 size: 0.000244 MiB name: 0000:3d:01.6_qat 00:05:35.287 size: 0.000244 MiB name: 0000:3d:01.7_qat 00:05:35.287 size: 0.000244 MiB name: 0000:3d:02.0_qat 00:05:35.287 size: 0.000244 MiB name: 0000:3d:02.1_qat 00:05:35.287 size: 0.000244 MiB name: 0000:3d:02.2_qat 00:05:35.287 size: 0.000244 MiB name: 0000:3d:02.3_qat 00:05:35.287 size: 0.000244 MiB name: 0000:3d:02.4_qat 00:05:35.287 size: 0.000244 MiB name: 0000:3d:02.5_qat 00:05:35.287 size: 0.000244 MiB name: 0000:3d:02.6_qat 00:05:35.287 size: 0.000244 MiB name: 0000:3d:02.7_qat 00:05:35.287 size: 0.000244 MiB name: 0000:3f:01.0_qat 00:05:35.287 size: 0.000244 MiB name: 0000:3f:01.1_qat 00:05:35.287 size: 0.000244 MiB name: 0000:3f:01.2_qat 00:05:35.287 size: 0.000244 MiB name: 0000:3f:01.3_qat 00:05:35.287 size: 0.000244 MiB name: 0000:3f:01.4_qat 00:05:35.287 size: 0.000244 MiB name: 0000:3f:01.5_qat 00:05:35.287 size: 0.000244 MiB name: 0000:3f:01.6_qat 00:05:35.287 size: 0.000244 MiB name: 0000:3f:01.7_qat 00:05:35.287 size: 0.000244 MiB 
name: 0000:3f:02.0_qat 00:05:35.287 size: 0.000244 MiB name: 0000:3f:02.1_qat 00:05:35.287 size: 0.000244 MiB name: 0000:3f:02.2_qat 00:05:35.287 size: 0.000244 MiB name: 0000:3f:02.3_qat 00:05:35.287 size: 0.000244 MiB name: 0000:3f:02.4_qat 00:05:35.287 size: 0.000244 MiB name: 0000:3f:02.5_qat 00:05:35.287 size: 0.000244 MiB name: 0000:3f:02.6_qat 00:05:35.287 size: 0.000244 MiB name: 0000:3f:02.7_qat 00:05:35.287 size: 0.000244 MiB name: 0000:da:01.0_qat 00:05:35.287 size: 0.000244 MiB name: 0000:da:01.1_qat 00:05:35.287 size: 0.000244 MiB name: 0000:da:01.2_qat 00:05:35.287 size: 0.000244 MiB name: 0000:da:01.3_qat 00:05:35.287 size: 0.000244 MiB name: 0000:da:01.4_qat 00:05:35.287 size: 0.000244 MiB name: 0000:da:01.5_qat 00:05:35.287 size: 0.000244 MiB name: 0000:da:01.6_qat 00:05:35.287 size: 0.000244 MiB name: 0000:da:01.7_qat 00:05:35.287 size: 0.000244 MiB name: 0000:da:02.0_qat 00:05:35.287 size: 0.000244 MiB name: 0000:da:02.1_qat 00:05:35.287 size: 0.000244 MiB name: 0000:da:02.2_qat 00:05:35.287 size: 0.000244 MiB name: 0000:da:02.3_qat 00:05:35.287 size: 0.000244 MiB name: 0000:da:02.4_qat 00:05:35.287 size: 0.000244 MiB name: 0000:da:02.5_qat 00:05:35.287 size: 0.000244 MiB name: 0000:da:02.6_qat 00:05:35.287 size: 0.000244 MiB name: 0000:da:02.7_qat 00:05:35.287 size: 0.000183 MiB name: QAT_ASYM_CAPA_GEN_1 00:05:35.287 size: 0.000122 MiB name: rte_cryptodev_data_0 00:05:35.287 size: 0.000122 MiB name: rte_compressdev_data_0 00:05:35.287 size: 0.000122 MiB name: rte_cryptodev_data_1 00:05:35.287 size: 0.000122 MiB name: rte_cryptodev_data_2 00:05:35.287 size: 0.000122 MiB name: rte_compressdev_data_1 00:05:35.287 size: 0.000122 MiB name: rte_cryptodev_data_3 00:05:35.287 size: 0.000122 MiB name: rte_cryptodev_data_4 00:05:35.287 size: 0.000122 MiB name: rte_compressdev_data_2 00:05:35.287 size: 0.000122 MiB name: rte_cryptodev_data_5 00:05:35.287 size: 0.000122 MiB name: rte_cryptodev_data_6 00:05:35.287 size: 0.000122 MiB name: 
rte_compressdev_data_3 00:05:35.287 size: 0.000122 MiB name: rte_cryptodev_data_7 00:05:35.287 size: 0.000122 MiB name: rte_cryptodev_data_8 00:05:35.287 size: 0.000122 MiB name: rte_compressdev_data_4 00:05:35.287 size: 0.000122 MiB name: rte_cryptodev_data_9 00:05:35.287 size: 0.000122 MiB name: rte_cryptodev_data_10 00:05:35.287 size: 0.000122 MiB name: rte_compressdev_data_5 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_11 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_12 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_6 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_13 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_14 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_7 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_15 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_16 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_8 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_17 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_18 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_9 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_19 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_20 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_10 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_21 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_22 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_11 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_23 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_24 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_12 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_25 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_26 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_13 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_27 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_28 00:05:35.288 size: 0.000122 MiB name: 
rte_compressdev_data_14 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_29 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_30 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_15 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_31 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_32 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_16 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_33 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_34 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_17 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_35 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_36 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_18 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_37 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_38 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_19 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_39 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_40 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_20 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_41 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_42 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_21 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_43 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_44 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_22 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_45 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_46 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_23 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_47 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_48 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_24 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_49 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_50 00:05:35.288 size: 0.000122 MiB 
name: rte_compressdev_data_25 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_51 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_52 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_26 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_53 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_54 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_27 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_55 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_56 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_28 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_57 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_58 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_29 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_59 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_60 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_30 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_61 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_62 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_31 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_63 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_64 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_32 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_65 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_66 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_33 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_67 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_68 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_34 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_69 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_70 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_35 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_71 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_72 00:05:35.288 size: 0.000122 
MiB name: rte_compressdev_data_36 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_73 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_74 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_37 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_75 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_76 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_38 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_77 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_78 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_39 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_79 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_80 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_40 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_81 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_82 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_41 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_83 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_84 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_42 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_85 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_86 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_43 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_87 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_88 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_44 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_89 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_90 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_45 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_91 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_92 00:05:35.288 size: 0.000122 MiB name: rte_compressdev_data_46 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_93 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_94 00:05:35.288 size: 
0.000122 MiB name: rte_compressdev_data_47 00:05:35.288 size: 0.000122 MiB name: rte_cryptodev_data_95 00:05:35.288 size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1 00:05:35.288 end memzones------- 00:05:35.288 11:57:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:35.551 heap id: 0 total size: 814.000000 MiB number of busy elements: 518 number of free elements: 14 00:05:35.551 list of free elements. size: 11.817932 MiB 00:05:35.551 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:35.551 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:35.551 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:35.551 element at address: 0x200003e00000 with size: 0.996460 MiB 00:05:35.551 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:35.551 element at address: 0x200013800000 with size: 0.978882 MiB 00:05:35.551 element at address: 0x200007000000 with size: 0.960022 MiB 00:05:35.551 element at address: 0x200019200000 with size: 0.937256 MiB 00:05:35.551 element at address: 0x20001aa00000 with size: 0.583252 MiB 00:05:35.551 element at address: 0x200003a00000 with size: 0.498535 MiB 00:05:35.551 element at address: 0x20000b200000 with size: 0.491272 MiB 00:05:35.551 element at address: 0x200000800000 with size: 0.487061 MiB 00:05:35.551 element at address: 0x200019400000 with size: 0.485840 MiB 00:05:35.551 element at address: 0x200027e00000 with size: 0.405640 MiB 00:05:35.551 list of standard malloc elements. 
size: 199.876709 MiB 00:05:35.551 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:35.551 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:35.551 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:35.551 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:35.551 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:35.551 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:35.551 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:35.551 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:35.551 element at address: 0x200000331700 with size: 0.004395 MiB 00:05:35.551 element at address: 0x200000334c40 with size: 0.004395 MiB 00:05:35.551 element at address: 0x200000338180 with size: 0.004395 MiB 00:05:35.551 element at address: 0x20000033b6c0 with size: 0.004395 MiB 00:05:35.551 element at address: 0x20000033ec00 with size: 0.004395 MiB 00:05:35.551 element at address: 0x200000342140 with size: 0.004395 MiB 00:05:35.551 element at address: 0x200000345680 with size: 0.004395 MiB 00:05:35.551 element at address: 0x200000348bc0 with size: 0.004395 MiB 00:05:35.551 element at address: 0x20000034c100 with size: 0.004395 MiB 00:05:35.551 element at address: 0x20000034f640 with size: 0.004395 MiB 00:05:35.551 element at address: 0x200000352b80 with size: 0.004395 MiB 00:05:35.551 element at address: 0x2000003560c0 with size: 0.004395 MiB 00:05:35.551 element at address: 0x200000359600 with size: 0.004395 MiB 00:05:35.551 element at address: 0x20000035cb40 with size: 0.004395 MiB 00:05:35.551 element at address: 0x200000360080 with size: 0.004395 MiB 00:05:35.551 element at address: 0x2000003635c0 with size: 0.004395 MiB 00:05:35.551 element at address: 0x200000367040 with size: 0.004395 MiB 00:05:35.551 element at address: 0x20000036aac0 with size: 0.004395 MiB 00:05:35.551 element at address: 0x20000036e540 with size: 0.004395 MiB 00:05:35.551 element at 
address: 0x200000371fc0 with size: 0.004395 MiB 00:05:35.551 element at address: 0x200000375a40 with size: 0.004395 MiB 00:05:35.551 element at address: 0x2000003794c0 with size: 0.004395 MiB 00:05:35.551 element at address: 0x20000037cf40 with size: 0.004395 MiB 00:05:35.551 element at address: 0x2000003809c0 with size: 0.004395 MiB 00:05:35.551 element at address: 0x200000384440 with size: 0.004395 MiB 00:05:35.551 element at address: 0x200000387ec0 with size: 0.004395 MiB 00:05:35.551 element at address: 0x20000038b940 with size: 0.004395 MiB 00:05:35.551 element at address: 0x20000038f3c0 with size: 0.004395 MiB 00:05:35.551 element at address: 0x200000392e40 with size: 0.004395 MiB 00:05:35.551 element at address: 0x2000003968c0 with size: 0.004395 MiB 00:05:35.551 element at address: 0x20000039a340 with size: 0.004395 MiB 00:05:35.551 element at address: 0x20000039ddc0 with size: 0.004395 MiB 00:05:35.551 element at address: 0x2000003a1840 with size: 0.004395 MiB 00:05:35.551 element at address: 0x2000003a52c0 with size: 0.004395 MiB 00:05:35.551 element at address: 0x2000003a8d40 with size: 0.004395 MiB 00:05:35.551 element at address: 0x2000003ac7c0 with size: 0.004395 MiB 00:05:35.551 element at address: 0x2000003b0240 with size: 0.004395 MiB 00:05:35.551 element at address: 0x2000003b3cc0 with size: 0.004395 MiB 00:05:35.551 element at address: 0x2000003b7740 with size: 0.004395 MiB 00:05:35.551 element at address: 0x2000003bb1c0 with size: 0.004395 MiB 00:05:35.551 element at address: 0x2000003bec40 with size: 0.004395 MiB 00:05:35.551 element at address: 0x2000003c26c0 with size: 0.004395 MiB 00:05:35.551 element at address: 0x2000003c6140 with size: 0.004395 MiB 00:05:35.551 element at address: 0x2000003c9bc0 with size: 0.004395 MiB 00:05:35.551 element at address: 0x2000003cd640 with size: 0.004395 MiB 00:05:35.551 element at address: 0x2000003d10c0 with size: 0.004395 MiB 00:05:35.551 element at address: 0x2000003d4b40 with size: 0.004395 MiB 
00:05:35.551 element at address: 0x2000003d8d00 with size: 0.004395 MiB 00:05:35.551 element at address: 0x20000032f600 with size: 0.004028 MiB 00:05:35.551 element at address: 0x200000330680 with size: 0.004028 MiB 00:05:35.551 element at address: 0x200000332b40 with size: 0.004028 MiB 00:05:35.551 element at address: 0x200000333bc0 with size: 0.004028 MiB 00:05:35.551 element at address: 0x200000336080 with size: 0.004028 MiB 00:05:35.551 element at address: 0x200000337100 with size: 0.004028 MiB 00:05:35.551 element at address: 0x2000003395c0 with size: 0.004028 MiB 00:05:35.551 element at address: 0x20000033a640 with size: 0.004028 MiB 00:05:35.551 element at address: 0x20000033cb00 with size: 0.004028 MiB 00:05:35.552 element at address: 0x20000033db80 with size: 0.004028 MiB 00:05:35.552 element at address: 0x200000340040 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003410c0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x200000343580 with size: 0.004028 MiB 00:05:35.552 element at address: 0x200000344600 with size: 0.004028 MiB 00:05:35.552 element at address: 0x200000346ac0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x200000347b40 with size: 0.004028 MiB 00:05:35.552 element at address: 0x20000034a000 with size: 0.004028 MiB 00:05:35.552 element at address: 0x20000034b080 with size: 0.004028 MiB 00:05:35.552 element at address: 0x20000034d540 with size: 0.004028 MiB 00:05:35.552 element at address: 0x20000034e5c0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x200000350a80 with size: 0.004028 MiB 00:05:35.552 element at address: 0x200000351b00 with size: 0.004028 MiB 00:05:35.552 element at address: 0x200000353fc0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x200000355040 with size: 0.004028 MiB 00:05:35.552 element at address: 0x200000357500 with size: 0.004028 MiB 00:05:35.552 element at address: 0x200000358580 with size: 0.004028 MiB 00:05:35.552 element at address: 0x20000035aa40 with 
size: 0.004028 MiB 00:05:35.552 element at address: 0x20000035bac0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x20000035df80 with size: 0.004028 MiB 00:05:35.552 element at address: 0x20000035f000 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003614c0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x200000362540 with size: 0.004028 MiB 00:05:35.552 element at address: 0x200000364f40 with size: 0.004028 MiB 00:05:35.552 element at address: 0x200000365fc0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003689c0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x200000369a40 with size: 0.004028 MiB 00:05:35.552 element at address: 0x20000036c440 with size: 0.004028 MiB 00:05:35.552 element at address: 0x20000036d4c0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x20000036fec0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x200000370f40 with size: 0.004028 MiB 00:05:35.552 element at address: 0x200000373940 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003749c0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003773c0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x200000378440 with size: 0.004028 MiB 00:05:35.552 element at address: 0x20000037ae40 with size: 0.004028 MiB 00:05:35.552 element at address: 0x20000037bec0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x20000037e8c0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x20000037f940 with size: 0.004028 MiB 00:05:35.552 element at address: 0x200000382340 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003833c0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x200000385dc0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x200000386e40 with size: 0.004028 MiB 00:05:35.552 element at address: 0x200000389840 with size: 0.004028 MiB 00:05:35.552 element at address: 0x20000038a8c0 with size: 0.004028 MiB 00:05:35.552 element at address: 
0x20000038d2c0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x20000038e340 with size: 0.004028 MiB 00:05:35.552 element at address: 0x200000390d40 with size: 0.004028 MiB 00:05:35.552 element at address: 0x200000391dc0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003947c0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x200000395840 with size: 0.004028 MiB 00:05:35.552 element at address: 0x200000398240 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003992c0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x20000039bcc0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x20000039cd40 with size: 0.004028 MiB 00:05:35.552 element at address: 0x20000039f740 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003a07c0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003a31c0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003a4240 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003a6c40 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003a7cc0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003aa6c0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003ab740 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003ae140 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003af1c0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003b1bc0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003b2c40 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003b5640 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003b66c0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003b90c0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003ba140 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003bcb40 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003bdbc0 with size: 0.004028 MiB 00:05:35.552 
element at address: 0x2000003c05c0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003c1640 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003c4040 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003c50c0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003c7ac0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003c8b40 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003cb540 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003cc5c0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003cefc0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003d0040 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003d2a40 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003d3ac0 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003d6c00 with size: 0.004028 MiB 00:05:35.552 element at address: 0x2000003d7c80 with size: 0.004028 MiB 00:05:35.552 element at address: 0x200000205c40 with size: 0.000305 MiB 00:05:35.552 element at address: 0x200000200000 with size: 0.000183 MiB 00:05:35.552 element at address: 0x2000002000c0 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000200180 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000200240 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000200300 with size: 0.000183 MiB 00:05:35.552 element at address: 0x2000002003c0 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000200480 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000200540 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000200600 with size: 0.000183 MiB 00:05:35.552 element at address: 0x2000002006c0 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000200780 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000200840 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000200900 with size: 0.000183 
MiB 00:05:35.552 element at address: 0x2000002009c0 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000200a80 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000200b40 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000200c00 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000200cc0 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000200d80 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000200e40 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000200f00 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000200fc0 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000201080 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000201140 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000201200 with size: 0.000183 MiB 00:05:35.552 element at address: 0x2000002012c0 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000201380 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000201440 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000201500 with size: 0.000183 MiB 00:05:35.552 element at address: 0x2000002015c0 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000201680 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000201740 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000201800 with size: 0.000183 MiB 00:05:35.552 element at address: 0x2000002018c0 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000201980 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000201a40 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000201b00 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000201bc0 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000201c80 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000201d40 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000201e00 
with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000201ec0 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000201f80 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000202040 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000202100 with size: 0.000183 MiB 00:05:35.552 element at address: 0x2000002021c0 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000202280 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000202340 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000202400 with size: 0.000183 MiB 00:05:35.552 element at address: 0x2000002024c0 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000202580 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000202640 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000202700 with size: 0.000183 MiB 00:05:35.552 element at address: 0x2000002027c0 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000202880 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000202940 with size: 0.000183 MiB 00:05:35.552 element at address: 0x200000202a00 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000202ac0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000202b80 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000202c40 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000202d00 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000202dc0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000202e80 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000202f40 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000203000 with size: 0.000183 MiB 00:05:35.553 element at address: 0x2000002030c0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000203180 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000203240 with size: 0.000183 MiB 00:05:35.553 element at 
address: 0x200000203300 with size: 0.000183 MiB 00:05:35.553 element at address: 0x2000002033c0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000203480 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000203540 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000203600 with size: 0.000183 MiB 00:05:35.553 element at address: 0x2000002036c0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000203780 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000203840 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000203900 with size: 0.000183 MiB 00:05:35.553 element at address: 0x2000002039c0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000203a80 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000203b40 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000203c00 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000203cc0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000203d80 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000203e40 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000203f00 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000203fc0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000204080 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000204140 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000204200 with size: 0.000183 MiB 00:05:35.553 element at address: 0x2000002042c0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000204380 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000204440 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000204500 with size: 0.000183 MiB 00:05:35.553 element at address: 0x2000002045c0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000204680 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000204740 with size: 0.000183 MiB 
00:05:35.553 element at address: 0x200000204800 with size: 0.000183 MiB 00:05:35.553 element at address: 0x2000002048c0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000204980 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000204a40 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000204b00 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000204bc0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000204c80 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000204d40 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000204e00 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000204ec0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000204f80 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000205040 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000205100 with size: 0.000183 MiB 00:05:35.553 element at address: 0x2000002051c0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000205280 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000205340 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000205400 with size: 0.000183 MiB 00:05:35.553 element at address: 0x2000002054c0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000205580 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000205640 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000205700 with size: 0.000183 MiB 00:05:35.553 element at address: 0x2000002057c0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000205880 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000205940 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000205a00 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000205ac0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000205b80 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000205d80 with 
size: 0.000183 MiB 00:05:35.553 element at address: 0x200000205e40 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000205f00 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000205fc0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000206080 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000206140 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000206200 with size: 0.000183 MiB 00:05:35.553 element at address: 0x2000002062c0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000206380 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000206440 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000206500 with size: 0.000183 MiB 00:05:35.553 element at address: 0x2000002065c0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000206680 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000206740 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000206800 with size: 0.000183 MiB 00:05:35.553 element at address: 0x2000002068c0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000206980 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000206a40 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000206b00 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000206bc0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000206c80 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000206d40 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000206e00 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000206ec0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x2000002070c0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000020b380 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000022b640 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000022b700 with size: 0.000183 MiB 00:05:35.553 element at address: 
0x20000022b7c0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000022b880 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000022b940 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000022ba00 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000022bac0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000022bb80 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000022bc40 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000022bd00 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000022bdc0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000022be80 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000022bf40 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000022c000 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000022c0c0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000022c180 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000022c240 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000022c300 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000022c3c0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000022c5c0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000022c680 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000022c740 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000022c800 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000022c8c0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000022c980 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000022ca40 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000022cb00 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000022cbc0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000022cc80 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000022cd40 with size: 0.000183 MiB 00:05:35.553 
element at address: 0x20000022ce00 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000022cec0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000022cf80 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000022d040 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000022d100 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000032f300 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000032f3c0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000332900 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000335e40 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000339380 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000033c8c0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000033fe00 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000343340 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000346880 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000349dc0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000034d300 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000350840 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000353d80 with size: 0.000183 MiB 00:05:35.553 element at address: 0x2000003572c0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000035a800 with size: 0.000183 MiB 00:05:35.553 element at address: 0x20000035dd40 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000361280 with size: 0.000183 MiB 00:05:35.553 element at address: 0x2000003647c0 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000364980 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000364b40 with size: 0.000183 MiB 00:05:35.553 element at address: 0x200000364c00 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200000368240 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200000368400 with size: 0.000183 
MiB 00:05:35.554 element at address: 0x2000003685c0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200000368680 with size: 0.000183 MiB 00:05:35.554 element at address: 0x20000036bcc0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x20000036be80 with size: 0.000183 MiB 00:05:35.554 element at address: 0x20000036c040 with size: 0.000183 MiB 00:05:35.554 element at address: 0x20000036c100 with size: 0.000183 MiB 00:05:35.554 element at address: 0x20000036f740 with size: 0.000183 MiB 00:05:35.554 element at address: 0x20000036f900 with size: 0.000183 MiB 00:05:35.554 element at address: 0x20000036fac0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x20000036fb80 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003731c0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200000373380 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200000373540 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200000373600 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200000376c40 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200000376e00 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200000376fc0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200000377080 with size: 0.000183 MiB 00:05:35.554 element at address: 0x20000037a6c0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x20000037a880 with size: 0.000183 MiB 00:05:35.554 element at address: 0x20000037aa40 with size: 0.000183 MiB 00:05:35.554 element at address: 0x20000037ab00 with size: 0.000183 MiB 00:05:35.554 element at address: 0x20000037e140 with size: 0.000183 MiB 00:05:35.554 element at address: 0x20000037e300 with size: 0.000183 MiB 00:05:35.554 element at address: 0x20000037e4c0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x20000037e580 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200000381bc0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200000381d80 
with size: 0.000183 MiB 00:05:35.554 element at address: 0x200000381f40 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200000382000 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200000385640 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200000385800 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003859c0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200000385a80 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003890c0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200000389280 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200000389440 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200000389500 with size: 0.000183 MiB 00:05:35.554 element at address: 0x20000038cb40 with size: 0.000183 MiB 00:05:35.554 element at address: 0x20000038cd00 with size: 0.000183 MiB 00:05:35.554 element at address: 0x20000038cec0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x20000038cf80 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003905c0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200000390780 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200000390940 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200000390a00 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200000394040 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200000394200 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003943c0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200000394480 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200000397ac0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200000397c80 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200000397e40 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200000397f00 with size: 0.000183 MiB 00:05:35.554 element at address: 0x20000039b540 with size: 0.000183 MiB 00:05:35.554 element at 
address: 0x20000039b700 with size: 0.000183 MiB 00:05:35.554 element at address: 0x20000039b8c0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x20000039b980 with size: 0.000183 MiB 00:05:35.554 element at address: 0x20000039efc0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x20000039f180 with size: 0.000183 MiB 00:05:35.554 element at address: 0x20000039f340 with size: 0.000183 MiB 00:05:35.554 element at address: 0x20000039f400 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003a2a40 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003a2c00 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003a2dc0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003a2e80 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003a64c0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003a6680 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003a6840 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003a6900 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003a9f40 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003aa100 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003aa2c0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003aa380 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003ad9c0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003adb80 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003add40 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003ade00 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003b1440 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003b1600 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003b17c0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003b1880 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003b4ec0 with size: 0.000183 MiB 
00:05:35.554 element at address: 0x2000003b5080 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003b5240 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003b5300 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003b8940 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003b8b00 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003b8cc0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003b8d80 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003bc3c0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003bc580 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003bc740 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003bc800 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003bfe40 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003c0000 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003c01c0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003c0280 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003c38c0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003c3a80 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003c3c40 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003c3d00 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003c7340 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003c7500 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003c76c0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003c7780 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003cadc0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003caf80 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003cb140 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003cb200 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003ce840 with 
size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003cea00 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003cebc0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003cec80 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003d22c0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003d2480 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003d2640 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003d2700 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003d5e80 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003d6100 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003d6800 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000003d68c0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:35.554 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:35.554 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:35.554 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:35.554 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200027e67d80 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200027e67e40 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200027e6ea40 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:35.554 element at address: 
0x200027e6f0c0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:35.554 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:35.555 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:35.555 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:35.555 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:35.555 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:35.555 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:35.555 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:35.555 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:35.555 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:35.555 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:35.555 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:35.555 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:35.555 list of memzone associated elements. 
size: 602.305359 MiB 00:05:35.555 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:35.555 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:35.555 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:35.555 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:35.555 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:35.555 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1181119_0 00:05:35.555 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:35.555 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1181119_0 00:05:35.555 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:35.555 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1181119_0 00:05:35.555 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:35.555 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:35.555 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:35.555 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:35.555 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:35.555 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1181119 00:05:35.555 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:35.555 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1181119 00:05:35.555 element at address: 0x20000022d1c0 with size: 1.008118 MiB 00:05:35.555 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1181119 00:05:35.555 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:35.555 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:35.555 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:35.555 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:35.555 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:35.555 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:35.555 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:35.555 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:35.555 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:35.555 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1181119 00:05:35.555 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:35.555 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1181119 00:05:35.555 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:35.555 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1181119 00:05:35.555 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:35.555 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1181119 00:05:35.555 element at address: 0x200003a7fa00 with size: 0.500488 MiB 00:05:35.555 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1181119 00:05:35.555 element at address: 0x20000b27dc40 with size: 0.500488 MiB 00:05:35.555 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:35.555 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:35.555 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:35.555 element at address: 0x20001947c600 with size: 0.250488 MiB 00:05:35.555 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:35.555 element at address: 0x20000020b440 with size: 0.125488 MiB 00:05:35.555 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1181119 00:05:35.555 element at address: 0x2000070f5c40 with size: 0.031738 MiB 00:05:35.555 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:35.555 element at address: 0x200027e67f00 with size: 0.023743 MiB 00:05:35.555 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:35.555 element at address: 0x200000207180 with size: 0.016113 
MiB 00:05:35.555 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1181119 00:05:35.555 element at address: 0x200027e6e040 with size: 0.002441 MiB 00:05:35.555 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:35.555 element at address: 0x2000003d62c0 with size: 0.001282 MiB 00:05:35.555 associated memzone info: size: 0.001160 MiB name: QAT_SYM_CAPA_GEN_1 00:05:35.555 element at address: 0x2000003d6a80 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:3d:01.0_qat 00:05:35.555 element at address: 0x2000003d28c0 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:3d:01.1_qat 00:05:35.555 element at address: 0x2000003cee40 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:3d:01.2_qat 00:05:35.555 element at address: 0x2000003cb3c0 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:3d:01.3_qat 00:05:35.555 element at address: 0x2000003c7940 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:3d:01.4_qat 00:05:35.555 element at address: 0x2000003c3ec0 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:3d:01.5_qat 00:05:35.555 element at address: 0x2000003c0440 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:3d:01.6_qat 00:05:35.555 element at address: 0x2000003bc9c0 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:3d:01.7_qat 00:05:35.555 element at address: 0x2000003b8f40 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:3d:02.0_qat 00:05:35.555 element at address: 0x2000003b54c0 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:3d:02.1_qat 00:05:35.555 element at address: 0x2000003b1a40 with size: 0.000366 MiB 00:05:35.555 
associated memzone info: size: 0.000244 MiB name: 0000:3d:02.2_qat 00:05:35.555 element at address: 0x2000003adfc0 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:3d:02.3_qat 00:05:35.555 element at address: 0x2000003aa540 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:3d:02.4_qat 00:05:35.555 element at address: 0x2000003a6ac0 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:3d:02.5_qat 00:05:35.555 element at address: 0x2000003a3040 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:3d:02.6_qat 00:05:35.555 element at address: 0x20000039f5c0 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:3d:02.7_qat 00:05:35.555 element at address: 0x20000039bb40 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:3f:01.0_qat 00:05:35.555 element at address: 0x2000003980c0 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:3f:01.1_qat 00:05:35.555 element at address: 0x200000394640 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:3f:01.2_qat 00:05:35.555 element at address: 0x200000390bc0 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:3f:01.3_qat 00:05:35.555 element at address: 0x20000038d140 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:3f:01.4_qat 00:05:35.555 element at address: 0x2000003896c0 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:3f:01.5_qat 00:05:35.555 element at address: 0x200000385c40 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:3f:01.6_qat 00:05:35.555 element at address: 0x2000003821c0 with size: 0.000366 MiB 00:05:35.555 associated memzone 
info: size: 0.000244 MiB name: 0000:3f:01.7_qat 00:05:35.555 element at address: 0x20000037e740 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:3f:02.0_qat 00:05:35.555 element at address: 0x20000037acc0 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:3f:02.1_qat 00:05:35.555 element at address: 0x200000377240 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:3f:02.2_qat 00:05:35.555 element at address: 0x2000003737c0 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:3f:02.3_qat 00:05:35.555 element at address: 0x20000036fd40 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:3f:02.4_qat 00:05:35.555 element at address: 0x20000036c2c0 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:3f:02.5_qat 00:05:35.555 element at address: 0x200000368840 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:3f:02.6_qat 00:05:35.555 element at address: 0x200000364dc0 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:3f:02.7_qat 00:05:35.555 element at address: 0x200000361340 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:da:01.0_qat 00:05:35.555 element at address: 0x20000035de00 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:da:01.1_qat 00:05:35.555 element at address: 0x20000035a8c0 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:da:01.2_qat 00:05:35.555 element at address: 0x200000357380 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:da:01.3_qat 00:05:35.555 element at address: 0x200000353e40 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 
MiB name: 0000:da:01.4_qat 00:05:35.555 element at address: 0x200000350900 with size: 0.000366 MiB 00:05:35.555 associated memzone info: size: 0.000244 MiB name: 0000:da:01.5_qat 00:05:35.556 element at address: 0x20000034d3c0 with size: 0.000366 MiB 00:05:35.556 associated memzone info: size: 0.000244 MiB name: 0000:da:01.6_qat 00:05:35.556 element at address: 0x200000349e80 with size: 0.000366 MiB 00:05:35.556 associated memzone info: size: 0.000244 MiB name: 0000:da:01.7_qat 00:05:35.556 element at address: 0x200000346940 with size: 0.000366 MiB 00:05:35.556 associated memzone info: size: 0.000244 MiB name: 0000:da:02.0_qat 00:05:35.556 element at address: 0x200000343400 with size: 0.000366 MiB 00:05:35.556 associated memzone info: size: 0.000244 MiB name: 0000:da:02.1_qat 00:05:35.556 element at address: 0x20000033fec0 with size: 0.000366 MiB 00:05:35.556 associated memzone info: size: 0.000244 MiB name: 0000:da:02.2_qat 00:05:35.556 element at address: 0x20000033c980 with size: 0.000366 MiB 00:05:35.556 associated memzone info: size: 0.000244 MiB name: 0000:da:02.3_qat 00:05:35.556 element at address: 0x200000339440 with size: 0.000366 MiB 00:05:35.556 associated memzone info: size: 0.000244 MiB name: 0000:da:02.4_qat 00:05:35.556 element at address: 0x200000335f00 with size: 0.000366 MiB 00:05:35.556 associated memzone info: size: 0.000244 MiB name: 0000:da:02.5_qat 00:05:35.556 element at address: 0x2000003329c0 with size: 0.000366 MiB 00:05:35.556 associated memzone info: size: 0.000244 MiB name: 0000:da:02.6_qat 00:05:35.556 element at address: 0x20000032f480 with size: 0.000366 MiB 00:05:35.556 associated memzone info: size: 0.000244 MiB name: 0000:da:02.7_qat 00:05:35.556 element at address: 0x2000003d5d40 with size: 0.000305 MiB 00:05:35.556 associated memzone info: size: 0.000183 MiB name: QAT_ASYM_CAPA_GEN_1 00:05:35.556 element at address: 0x20000022c480 with size: 0.000305 MiB 00:05:35.556 associated memzone info: size: 0.000183 MiB name: 
MP_msgpool_1181119 00:05:35.556 element at address: 0x200000206f80 with size: 0.000305 MiB 00:05:35.556 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1181119 00:05:35.556 element at address: 0x200027e6eb00 with size: 0.000305 MiB 00:05:35.556 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:35.556 element at address: 0x2000003d6980 with size: 0.000244 MiB 00:05:35.556 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_0 00:05:35.556 element at address: 0x2000003d61c0 with size: 0.000244 MiB 00:05:35.556 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_0 00:05:35.556 element at address: 0x2000003d5f40 with size: 0.000244 MiB 00:05:35.556 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_1 00:05:35.556 element at address: 0x2000003d27c0 with size: 0.000244 MiB 00:05:35.556 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_2 00:05:35.556 element at address: 0x2000003d2540 with size: 0.000244 MiB 00:05:35.556 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_1 00:05:35.556 element at address: 0x2000003d2380 with size: 0.000244 MiB 00:05:35.556 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_3 00:05:35.556 element at address: 0x2000003ced40 with size: 0.000244 MiB 00:05:35.556 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_4 00:05:35.556 element at address: 0x2000003ceac0 with size: 0.000244 MiB 00:05:35.556 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_2 00:05:35.557 element at address: 0x2000003ce900 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_5 00:05:35.557 element at address: 0x2000003cb2c0 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_6 00:05:35.557 element at address: 0x2000003cb040 with size: 0.000244 MiB 00:05:35.557 associated memzone info: 
size: 0.000122 MiB name: rte_compressdev_data_3 00:05:35.557 element at address: 0x2000003cae80 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_7 00:05:35.557 element at address: 0x2000003c7840 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_8 00:05:35.557 element at address: 0x2000003c75c0 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_4 00:05:35.557 element at address: 0x2000003c7400 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_9 00:05:35.557 element at address: 0x2000003c3dc0 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_10 00:05:35.557 element at address: 0x2000003c3b40 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_5 00:05:35.557 element at address: 0x2000003c3980 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_11 00:05:35.557 element at address: 0x2000003c0340 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_12 00:05:35.557 element at address: 0x2000003c00c0 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_6 00:05:35.557 element at address: 0x2000003bff00 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_13 00:05:35.557 element at address: 0x2000003bc8c0 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_14 00:05:35.557 element at address: 0x2000003bc640 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_7 00:05:35.557 element at address: 0x2000003bc480 with size: 
0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_15 00:05:35.557 element at address: 0x2000003b8e40 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_16 00:05:35.557 element at address: 0x2000003b8bc0 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_8 00:05:35.557 element at address: 0x2000003b8a00 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_17 00:05:35.557 element at address: 0x2000003b53c0 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_18 00:05:35.557 element at address: 0x2000003b5140 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_9 00:05:35.557 element at address: 0x2000003b4f80 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_19 00:05:35.557 element at address: 0x2000003b1940 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_20 00:05:35.557 element at address: 0x2000003b16c0 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_10 00:05:35.557 element at address: 0x2000003b1500 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_21 00:05:35.557 element at address: 0x2000003adec0 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_22 00:05:35.557 element at address: 0x2000003adc40 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_11 00:05:35.557 element at address: 0x2000003ada80 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_23 
00:05:35.557 element at address: 0x2000003aa440 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_24 00:05:35.557 element at address: 0x2000003aa1c0 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_12 00:05:35.557 element at address: 0x2000003aa000 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_25 00:05:35.557 element at address: 0x2000003a69c0 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_26 00:05:35.557 element at address: 0x2000003a6740 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_13 00:05:35.557 element at address: 0x2000003a6580 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_27 00:05:35.557 element at address: 0x2000003a2f40 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_28 00:05:35.557 element at address: 0x2000003a2cc0 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_14 00:05:35.557 element at address: 0x2000003a2b00 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_29 00:05:35.557 element at address: 0x20000039f4c0 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_30 00:05:35.557 element at address: 0x20000039f240 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_15 00:05:35.557 element at address: 0x20000039f080 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_31 00:05:35.557 element at address: 0x20000039ba40 with size: 0.000244 MiB 00:05:35.557 associated memzone 
info: size: 0.000122 MiB name: rte_cryptodev_data_32 00:05:35.557 element at address: 0x20000039b7c0 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_16 00:05:35.557 element at address: 0x20000039b600 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_33 00:05:35.557 element at address: 0x200000397fc0 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_34 00:05:35.557 element at address: 0x200000397d40 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_17 00:05:35.557 element at address: 0x200000397b80 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_35 00:05:35.557 element at address: 0x200000394540 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_36 00:05:35.557 element at address: 0x2000003942c0 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_18 00:05:35.557 element at address: 0x200000394100 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_37 00:05:35.557 element at address: 0x200000390ac0 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_38 00:05:35.557 element at address: 0x200000390840 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_19 00:05:35.557 element at address: 0x200000390680 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_39 00:05:35.557 element at address: 0x20000038d040 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_40 00:05:35.557 element at address: 0x20000038cdc0 with 
size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_20 00:05:35.557 element at address: 0x20000038cc00 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_41 00:05:35.557 element at address: 0x2000003895c0 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_42 00:05:35.557 element at address: 0x200000389340 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_21 00:05:35.557 element at address: 0x200000389180 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_43 00:05:35.557 element at address: 0x200000385b40 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_44 00:05:35.557 element at address: 0x2000003858c0 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_22 00:05:35.557 element at address: 0x200000385700 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_45 00:05:35.557 element at address: 0x2000003820c0 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_46 00:05:35.557 element at address: 0x200000381e40 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_23 00:05:35.557 element at address: 0x200000381c80 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_47 00:05:35.557 element at address: 0x20000037e640 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_48 00:05:35.557 element at address: 0x20000037e3c0 with size: 0.000244 MiB 00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_24 
00:05:35.557 element at address: 0x20000037e200 with size: 0.000244 MiB
00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_49
00:05:35.557 element at address: 0x20000037abc0 with size: 0.000244 MiB
00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_50
00:05:35.557 element at address: 0x20000037a940 with size: 0.000244 MiB
00:05:35.557 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_25
00:05:35.557 element at address: 0x20000037a780 with size: 0.000244 MiB
00:05:35.558 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_51
00:05:35.558 element at address: 0x200000377140 with size: 0.000244 MiB
00:05:35.558 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_52
00:05:35.558 element at address: 0x200000376ec0 with size: 0.000244 MiB
00:05:35.558 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_26
00:05:35.558 element at address: 0x200000376d00 with size: 0.000244 MiB
00:05:35.558 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_53
00:05:35.558 element at address: 0x2000003736c0 with size: 0.000244 MiB
00:05:35.558 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_54
00:05:35.558 element at address: 0x200000373440 with size: 0.000244 MiB
00:05:35.558 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_27
00:05:35.558 element at address: 0x200000373280 with size: 0.000244 MiB
00:05:35.558 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_55
00:05:35.558 element at address: 0x20000036fc40 with size: 0.000244 MiB
00:05:35.558 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_56
00:05:35.558 element at address: 0x20000036f9c0 with size: 0.000244 MiB
00:05:35.558 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_28
00:05:35.558 element at address: 0x20000036f800 with size: 0.000244 MiB
00:05:35.558 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_57
00:05:35.558 element at address: 0x20000036c1c0 with size: 0.000244 MiB
00:05:35.558 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_58
00:05:35.558 element at address: 0x20000036bf40 with size: 0.000244 MiB
00:05:35.558 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_29
00:05:35.558 element at address: 0x20000036bd80 with size: 0.000244 MiB
00:05:35.558 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_59
00:05:35.558 element at address: 0x200000368740 with size: 0.000244 MiB
00:05:35.558 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_60
00:05:35.558 element at address: 0x2000003684c0 with size: 0.000244 MiB
00:05:35.558 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_30
00:05:35.558 element at address: 0x200000368300 with size: 0.000244 MiB
00:05:35.558 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_61
00:05:35.558 element at address: 0x200000364cc0 with size: 0.000244 MiB
00:05:35.558 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_62
00:05:35.558 element at address: 0x200000364a40 with size: 0.000244 MiB
00:05:35.558 associated memzone info: size: 0.000122 MiB name: rte_compressdev_data_31
00:05:35.558 element at address: 0x200000364880 with size: 0.000244 MiB
00:05:35.558 associated memzone info: size: 0.000122 MiB name: rte_cryptodev_data_63
00:05:35.558 element at address: 0x2000003d6040 with size: 0.000183 MiB
00:05:35.558 associated memzone info: size: 0.000061 MiB name: QAT_COMP_CAPA_GEN_1
00:05:35.558 11:57:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:35.558 11:57:42 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1181119
00:05:35.558 11:57:42 -- common/autotest_common.sh@926 -- # '[' -z 1181119 ']'
00:05:35.558 11:57:42 -- common/autotest_common.sh@930 -- # kill -0 1181119
00:05:35.558 11:57:42 -- common/autotest_common.sh@931 -- # uname
00:05:35.558 11:57:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:05:35.558 11:57:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1181119
00:05:35.558 11:57:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:05:35.558 11:57:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:05:35.558 11:57:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1181119'
00:05:35.558 killing process with pid 1181119
00:05:35.558 11:57:42 -- common/autotest_common.sh@945 -- # kill 1181119
00:05:35.558 11:57:42 -- common/autotest_common.sh@950 -- # wait 1181119
00:05:35.817
00:05:35.817 real 0m1.535s
00:05:35.817 user 0m1.555s
00:05:35.817 sys 0m0.490s
00:05:35.817 11:57:43 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:35.817 11:57:43 -- common/autotest_common.sh@10 -- # set +x
00:05:35.817 ************************************
00:05:35.817 END TEST dpdk_mem_utility
00:05:35.817 ************************************
00:05:35.817 11:57:43 -- spdk/autotest.sh@187 -- # run_test event /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh
00:05:35.817 11:57:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:35.817 11:57:43 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:35.817 11:57:43 -- common/autotest_common.sh@10 -- # set +x
00:05:35.817 ************************************
00:05:35.817 START TEST event
00:05:35.817 ************************************
00:05:35.817 11:57:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event.sh
00:05:36.077 * Looking for test storage...
00:05:36.077 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event
00:05:36.077 11:57:43 -- event/event.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh
00:05:36.077 11:57:43 -- bdev/nbd_common.sh@6 -- # set -e
00:05:36.077 11:57:43 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:36.077 11:57:43 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']'
00:05:36.077 11:57:43 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:36.077 11:57:43 -- common/autotest_common.sh@10 -- # set +x
00:05:36.077 ************************************
00:05:36.077 START TEST event_perf
00:05:36.077 ************************************
00:05:36.077 11:57:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:36.077 Running I/O for 1 seconds...[2024-07-25 11:57:43.241576] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... [2024-07-25 11:57:43.241636] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1181362 ]
00:05:36.077 [2024-07-25 11:57:43.328923] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:36.335 [2024-07-25 11:57:43.413567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:36.335 [2024-07-25 11:57:43.413656] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:05:36.335 [2024-07-25 11:57:43.413733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:05:36.335 [2024-07-25 11:57:43.413735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:37.272 Running I/O for 1 seconds...
00:05:37.272 lcore 0: 204053
00:05:37.272 lcore 1: 204052
00:05:37.272 lcore 2: 204052
00:05:37.272 lcore 3: 204052
00:05:37.272 done.
00:05:37.272
00:05:37.272 real 0m1.288s
00:05:37.272 user 0m4.183s
00:05:37.272 sys 0m0.102s
00:05:37.272 11:57:44 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:37.272 11:57:44 -- common/autotest_common.sh@10 -- # set +x
00:05:37.272 ************************************
00:05:37.272 END TEST event_perf
00:05:37.272 ************************************
00:05:37.272 11:57:44 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:37.272 11:57:44 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']'
00:05:37.272 11:57:44 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:37.272 11:57:44 -- common/autotest_common.sh@10 -- # set +x
00:05:37.272 ************************************
00:05:37.272 START TEST event_reactor
00:05:37.272 ************************************
00:05:37.272 11:57:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:37.531 [2024-07-25 11:57:44.587801] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:05:37.532 [2024-07-25 11:57:44.587866] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1181558 ]
00:05:37.532 [2024-07-25 11:57:44.676714] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:37.532 [2024-07-25 11:57:44.757359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:38.909 test_start
00:05:38.909 oneshot
00:05:38.909 tick 100
00:05:38.909 tick 100
00:05:38.909 tick 250
00:05:38.909 tick 100
00:05:38.909 tick 100
00:05:38.909 tick 100
00:05:38.909 tick 250
00:05:38.909 tick 500
00:05:38.909 tick 100
00:05:38.909 tick 100
00:05:38.909 tick 250
00:05:38.909 tick 100
00:05:38.909 tick 100
00:05:38.909 test_end
00:05:38.909
00:05:38.909 real 0m1.296s
00:05:38.909 user 0m1.191s
00:05:38.909 sys 0m0.099s
00:05:38.909 11:57:45 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:38.909 11:57:45 -- common/autotest_common.sh@10 -- # set +x
00:05:38.909 ************************************
00:05:38.909 END TEST event_reactor
00:05:38.909 ************************************
00:05:38.909 11:57:45 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:38.909 11:57:45 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']'
00:05:38.909 11:57:45 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:38.909 11:57:45 -- common/autotest_common.sh@10 -- # set +x
00:05:38.909 ************************************
00:05:38.909 START TEST event_reactor_perf
00:05:38.909 ************************************
00:05:38.909 11:57:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:38.909 [2024-07-25 11:57:45.923735] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:05:38.909 [2024-07-25 11:57:45.923810] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1181752 ]
00:05:38.909 [2024-07-25 11:57:46.012424] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:38.909 [2024-07-25 11:57:46.094912] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:40.289 test_start
00:05:40.289 test_end
00:05:40.289 Performance: 506944 events per second
00:05:40.289
00:05:40.289 real 0m1.298s
00:05:40.289 user 0m1.193s
00:05:40.289 sys 0m0.100s
00:05:40.289 11:57:47 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:40.289 11:57:47 -- common/autotest_common.sh@10 -- # set +x
00:05:40.289 ************************************
00:05:40.289 END TEST event_reactor_perf
00:05:40.289 ************************************
00:05:40.289 11:57:47 -- event/event.sh@49 -- # uname -s
00:05:40.289 11:57:47 -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:05:40.289 11:57:47 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:40.289 11:57:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:40.289 11:57:47 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:40.289 11:57:47 -- common/autotest_common.sh@10 -- # set +x
00:05:40.289 ************************************
00:05:40.289 START TEST event_scheduler
00:05:40.289 ************************************
00:05:40.289 11:57:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:40.289 * Looking for test storage...
00:05:40.289 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler
00:05:40.289 11:57:47 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:05:40.289 11:57:47 -- scheduler/scheduler.sh@35 -- # scheduler_pid=1182012
00:05:40.289 11:57:47 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:05:40.289 11:57:47 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:05:40.289 11:57:47 -- scheduler/scheduler.sh@37 -- # waitforlisten 1182012
00:05:40.289 11:57:47 -- common/autotest_common.sh@819 -- # '[' -z 1182012 ']'
00:05:40.289 11:57:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:40.289 11:57:47 -- common/autotest_common.sh@824 -- # local max_retries=100
00:05:40.289 11:57:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:40.289 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:40.289 11:57:47 -- common/autotest_common.sh@828 -- # xtrace_disable
00:05:40.289 11:57:47 -- common/autotest_common.sh@10 -- # set +x
00:05:40.289 [2024-07-25 11:57:47.383225] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:05:40.289 [2024-07-25 11:57:47.383293] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1182012 ]
00:05:40.289 [2024-07-25 11:57:47.467906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:40.289 [2024-07-25 11:57:47.556140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:40.289 [2024-07-25 11:57:47.556217] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:40.289 [2024-07-25 11:57:47.556308] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:05:40.289 [2024-07-25 11:57:47.556310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:05:41.226 11:57:48 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:05:41.226 11:57:48 -- common/autotest_common.sh@852 -- # return 0
00:05:41.226 11:57:48 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:05:41.226 11:57:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:41.226 11:57:48 -- common/autotest_common.sh@10 -- # set +x
00:05:41.226 POWER: Env isn't set yet!
00:05:41.226 POWER: Attempting to initialise ACPI cpufreq power management...
00:05:41.226 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor
00:05:41.226 POWER: Cannot set governor of lcore 0 to userspace
00:05:41.226 POWER: Attempting to initialise PSTAT power management...
00:05:41.226 POWER: Power management governor of lcore 0 has been set to 'performance' successfully
00:05:41.226 POWER: Initialized successfully for lcore 0 power management
00:05:41.226 POWER: Power management governor of lcore 1 has been set to 'performance' successfully
00:05:41.226 POWER: Initialized successfully for lcore 1 power management
00:05:41.226 POWER: Power management governor of lcore 2 has been set to 'performance' successfully
00:05:41.226 POWER: Initialized successfully for lcore 2 power management
00:05:41.226 POWER: Power management governor of lcore 3 has been set to 'performance' successfully
00:05:41.226 POWER: Initialized successfully for lcore 3 power management
00:05:41.226 [2024-07-25 11:57:48.245443] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:05:41.226 [2024-07-25 11:57:48.245460] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:05:41.226 [2024-07-25 11:57:48.245469] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:05:41.226 11:57:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:41.226 11:57:48 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:05:41.226 11:57:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:41.226 11:57:48 -- common/autotest_common.sh@10 -- # set +x
00:05:41.226 [2024-07-25 11:57:48.328113] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:05:41.226 11:57:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:41.226 11:57:48 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:05:41.226 11:57:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:41.226 11:57:48 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:41.226 11:57:48 -- common/autotest_common.sh@10 -- # set +x
00:05:41.226 ************************************
00:05:41.226 START TEST scheduler_create_thread
00:05:41.226 ************************************
00:05:41.226 11:57:48 -- common/autotest_common.sh@1104 -- # scheduler_create_thread
00:05:41.226 11:57:48 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:05:41.226 11:57:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:41.226 11:57:48 -- common/autotest_common.sh@10 -- # set +x
00:05:41.226 2
00:05:41.226 11:57:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:41.226 11:57:48 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:05:41.226 11:57:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:41.226 11:57:48 -- common/autotest_common.sh@10 -- # set +x
00:05:41.226 3
00:05:41.226 11:57:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:41.226 11:57:48 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:05:41.226 11:57:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:41.226 11:57:48 -- common/autotest_common.sh@10 -- # set +x
00:05:41.226 4
00:05:41.226 11:57:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:41.226 11:57:48 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:05:41.226 11:57:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:41.226 11:57:48 -- common/autotest_common.sh@10 -- # set +x
00:05:41.226 5
00:05:41.226 11:57:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:41.226 11:57:48 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:05:41.226 11:57:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:41.226 11:57:48 -- common/autotest_common.sh@10 -- # set +x
00:05:41.226 6
00:05:41.226 11:57:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:41.226 11:57:48 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:05:41.226 11:57:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:41.226 11:57:48 -- common/autotest_common.sh@10 -- # set +x
00:05:41.226 7
00:05:41.226 11:57:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:41.226 11:57:48 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:05:41.226 11:57:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:41.226 11:57:48 -- common/autotest_common.sh@10 -- # set +x
00:05:41.226 8
00:05:41.226 11:57:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:41.226 11:57:48 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:05:41.226 11:57:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:41.226 11:57:48 -- common/autotest_common.sh@10 -- # set +x
00:05:41.226 9
00:05:41.226 11:57:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:41.226 11:57:48 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:05:41.226 11:57:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:41.226 11:57:48 -- common/autotest_common.sh@10 -- # set +x
00:05:41.226 10
00:05:41.226 11:57:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:41.227 11:57:48 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:05:41.227 11:57:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:41.227 11:57:48 -- common/autotest_common.sh@10 -- # set +x
00:05:41.227 11:57:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:41.227 11:57:48 -- scheduler/scheduler.sh@22 -- # thread_id=11
00:05:41.227 11:57:48 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:05:41.227 11:57:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:41.227 11:57:48 -- common/autotest_common.sh@10 -- # set +x
00:05:41.227 11:57:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:41.227 11:57:48 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100
00:05:41.227 11:57:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:41.227 11:57:48 -- common/autotest_common.sh@10 -- # set +x
00:05:42.603 11:57:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:42.603 11:57:49 -- scheduler/scheduler.sh@25 -- # thread_id=12
00:05:42.603 11:57:49 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12
00:05:42.603 11:57:49 -- common/autotest_common.sh@551 -- # xtrace_disable
00:05:42.603 11:57:49 -- common/autotest_common.sh@10 -- # set +x
00:05:43.981 11:57:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:05:43.981
00:05:43.981 real 0m2.619s
00:05:43.981 user 0m0.024s
00:05:43.981 sys 0m0.007s
00:05:43.981 11:57:50 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:43.981 11:57:50 -- common/autotest_common.sh@10 -- # set +x
00:05:43.981 ************************************
00:05:43.981 END TEST scheduler_create_thread
00:05:43.981 ************************************
00:05:43.981 11:57:50 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT
00:05:43.981 11:57:50 -- scheduler/scheduler.sh@46 -- # killprocess 1182012
00:05:43.981 11:57:50 -- common/autotest_common.sh@926 -- # '[' -z 1182012 ']'
00:05:43.981 11:57:50 -- common/autotest_common.sh@930 -- # kill -0 1182012
00:05:43.981 11:57:51 -- common/autotest_common.sh@931 -- # uname
00:05:43.981 11:57:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:05:43.981 11:57:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1182012
00:05:43.981 11:57:51 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:05:43.981 11:57:51 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:05:43.981 11:57:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1182012'
00:05:43.981 killing process with pid 1182012
00:05:43.981 11:57:51 -- common/autotest_common.sh@945 -- # kill 1182012
00:05:43.981 11:57:51 -- common/autotest_common.sh@950 -- # wait 1182012
00:05:44.240 [2024-07-25 11:57:51.434525] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped.
00:05:44.500 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully
00:05:44.500 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original
00:05:44.500 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully
00:05:44.500 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original
00:05:44.500 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully
00:05:44.500 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original
00:05:44.500 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully
00:05:44.500 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original
00:05:44.500
00:05:44.500 real 0m4.449s
00:05:44.500 user 0m8.249s
00:05:44.500 sys 0m0.419s
00:05:44.500 11:57:51 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:44.500 11:57:51 -- common/autotest_common.sh@10 -- # set +x
00:05:44.500 ************************************
00:05:44.500 END TEST event_scheduler
00:05:44.500 ************************************
00:05:44.500 11:57:51 -- event/event.sh@51 -- # modprobe -n nbd
00:05:44.500 11:57:51 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test
00:05:44.500 11:57:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:05:44.500 11:57:51 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:05:44.500 11:57:51 -- common/autotest_common.sh@10 -- # set +x
00:05:44.500 ************************************
00:05:44.500 START TEST app_repeat
00:05:44.500 ************************************
00:05:44.500 11:57:51 -- common/autotest_common.sh@1104 -- # app_repeat_test
00:05:44.500 11:57:51 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:44.500 11:57:51 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:44.500 11:57:51 -- event/event.sh@13 -- # local nbd_list
00:05:44.500 11:57:51 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1')
00:05:44.500 11:57:51 -- event/event.sh@14 -- # local bdev_list
00:05:44.500 11:57:51 -- event/event.sh@15 -- # local repeat_times=4
00:05:44.500 11:57:51 -- event/event.sh@17 -- # modprobe nbd
00:05:44.500 11:57:51 -- event/event.sh@19 -- # repeat_pid=1182734
00:05:44.500 11:57:51 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT
00:05:44.500 11:57:51 -- event/event.sh@18 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4
00:05:44.500 11:57:51 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1182734'
00:05:44.500 Process app_repeat pid: 1182734
00:05:44.500 11:57:51 -- event/event.sh@23 -- # for i in {0..2}
00:05:44.500 11:57:51 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0'
00:05:44.500 spdk_app_start Round 0
00:05:44.500 11:57:51 -- event/event.sh@25 -- # waitforlisten 1182734 /var/tmp/spdk-nbd.sock
00:05:44.500 11:57:51 -- common/autotest_common.sh@819 -- # '[' -z 1182734 ']'
00:05:44.500 11:57:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:44.500 11:57:51 -- common/autotest_common.sh@824 -- # local max_retries=100
00:05:44.500 11:57:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:05:44.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:05:44.500 11:57:51 -- common/autotest_common.sh@828 -- # xtrace_disable
00:05:44.500 11:57:51 -- common/autotest_common.sh@10 -- # set +x
00:05:44.500 [2024-07-25 11:57:51.775542] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:05:44.500 [2024-07-25 11:57:51.775605] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1182734 ] 00:05:44.759 [2024-07-25 11:57:51.860709] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:44.759 [2024-07-25 11:57:51.941762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.759 [2024-07-25 11:57:51.941766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.326 11:57:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:45.326 11:57:52 -- common/autotest_common.sh@852 -- # return 0 00:05:45.326 11:57:52 -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.585 Malloc0 00:05:45.585 11:57:52 -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.844 Malloc1 00:05:45.844 11:57:52 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.844 11:57:52 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.844 11:57:52 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.844 11:57:52 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:45.844 11:57:52 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.844 11:57:52 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:45.844 11:57:52 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.844 11:57:52 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.844 11:57:52 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.844 11:57:52 -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:05:45.844 11:57:52 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.844 11:57:52 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:45.844 11:57:52 -- bdev/nbd_common.sh@12 -- # local i 00:05:45.844 11:57:52 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:45.844 11:57:52 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.844 11:57:52 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:46.103 /dev/nbd0 00:05:46.103 11:57:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:46.103 11:57:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:46.103 11:57:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:46.103 11:57:53 -- common/autotest_common.sh@857 -- # local i 00:05:46.103 11:57:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:46.103 11:57:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:46.103 11:57:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:46.103 11:57:53 -- common/autotest_common.sh@861 -- # break 00:05:46.103 11:57:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:46.103 11:57:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:46.103 11:57:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.103 1+0 records in 00:05:46.103 1+0 records out 00:05:46.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246342 s, 16.6 MB/s 00:05:46.103 11:57:53 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:05:46.103 11:57:53 -- common/autotest_common.sh@874 -- # size=4096 00:05:46.103 11:57:53 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:05:46.103 11:57:53 -- common/autotest_common.sh@876 -- # '[' 
4096 '!=' 0 ']' 00:05:46.103 11:57:53 -- common/autotest_common.sh@877 -- # return 0 00:05:46.103 11:57:53 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.103 11:57:53 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.103 11:57:53 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:46.103 /dev/nbd1 00:05:46.103 11:57:53 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:46.103 11:57:53 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:46.103 11:57:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:46.103 11:57:53 -- common/autotest_common.sh@857 -- # local i 00:05:46.103 11:57:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:46.103 11:57:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:46.103 11:57:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:46.103 11:57:53 -- common/autotest_common.sh@861 -- # break 00:05:46.103 11:57:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:46.103 11:57:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:46.103 11:57:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.103 1+0 records in 00:05:46.103 1+0 records out 00:05:46.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000245901 s, 16.7 MB/s 00:05:46.362 11:57:53 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:05:46.362 11:57:53 -- common/autotest_common.sh@874 -- # size=4096 00:05:46.362 11:57:53 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:05:46.362 11:57:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:46.362 11:57:53 -- common/autotest_common.sh@877 -- # return 0 00:05:46.362 11:57:53 -- bdev/nbd_common.sh@14 -- # (( 
i++ )) 00:05:46.362 11:57:53 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.362 11:57:53 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.362 11:57:53 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.362 11:57:53 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.362 11:57:53 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:46.362 { 00:05:46.362 "nbd_device": "/dev/nbd0", 00:05:46.362 "bdev_name": "Malloc0" 00:05:46.362 }, 00:05:46.362 { 00:05:46.362 "nbd_device": "/dev/nbd1", 00:05:46.362 "bdev_name": "Malloc1" 00:05:46.362 } 00:05:46.362 ]' 00:05:46.362 11:57:53 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:46.362 { 00:05:46.362 "nbd_device": "/dev/nbd0", 00:05:46.362 "bdev_name": "Malloc0" 00:05:46.362 }, 00:05:46.362 { 00:05:46.362 "nbd_device": "/dev/nbd1", 00:05:46.362 "bdev_name": "Malloc1" 00:05:46.362 } 00:05:46.362 ]' 00:05:46.362 11:57:53 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.362 11:57:53 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:46.362 /dev/nbd1' 00:05:46.362 11:57:53 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.362 11:57:53 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:46.362 /dev/nbd1' 00:05:46.362 11:57:53 -- bdev/nbd_common.sh@65 -- # count=2 00:05:46.362 11:57:53 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:46.362 11:57:53 -- bdev/nbd_common.sh@95 -- # count=2 00:05:46.362 11:57:53 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:46.362 11:57:53 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:46.362 11:57:53 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.362 11:57:53 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.362 11:57:53 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:46.362 11:57:53 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.362 11:57:53 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:46.362 11:57:53 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:46.362 256+0 records in 00:05:46.362 256+0 records out 00:05:46.362 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0114704 s, 91.4 MB/s 00:05:46.362 11:57:53 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.362 11:57:53 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:46.621 256+0 records in 00:05:46.621 256+0 records out 00:05:46.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204166 s, 51.4 MB/s 00:05:46.621 11:57:53 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.621 11:57:53 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:46.621 256+0 records in 00:05:46.621 256+0 records out 00:05:46.621 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0158265 s, 66.3 MB/s 00:05:46.621 11:57:53 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:46.621 11:57:53 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.621 11:57:53 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.621 11:57:53 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:46.621 11:57:53 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.621 11:57:53 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:46.621 11:57:53 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:46.621 11:57:53 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.621 11:57:53 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:46.621 11:57:53 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.621 11:57:53 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:46.621 11:57:53 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.621 11:57:53 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:46.621 11:57:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.621 11:57:53 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.621 11:57:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:46.621 11:57:53 -- bdev/nbd_common.sh@51 -- # local i 00:05:46.621 11:57:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.621 11:57:53 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:46.621 11:57:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:46.621 11:57:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:46.621 11:57:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:46.621 11:57:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.621 11:57:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.621 11:57:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:46.621 11:57:53 -- bdev/nbd_common.sh@41 -- # break 00:05:46.621 11:57:53 -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.621 11:57:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.621 11:57:53 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:46.880 11:57:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:46.880 11:57:54 -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:46.880 11:57:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:46.880 11:57:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.880 11:57:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.880 11:57:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:46.880 11:57:54 -- bdev/nbd_common.sh@41 -- # break 00:05:46.880 11:57:54 -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.880 11:57:54 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.880 11:57:54 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.880 11:57:54 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:47.139 11:57:54 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:47.139 11:57:54 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:47.139 11:57:54 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.139 11:57:54 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:47.139 11:57:54 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:47.139 11:57:54 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.139 11:57:54 -- bdev/nbd_common.sh@65 -- # true 00:05:47.139 11:57:54 -- bdev/nbd_common.sh@65 -- # count=0 00:05:47.139 11:57:54 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:47.139 11:57:54 -- bdev/nbd_common.sh@104 -- # count=0 00:05:47.139 11:57:54 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:47.139 11:57:54 -- bdev/nbd_common.sh@109 -- # return 0 00:05:47.139 11:57:54 -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:47.398 11:57:54 -- event/event.sh@35 -- # sleep 3 00:05:47.657 [2024-07-25 11:57:54.760382] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.657 [2024-07-25 11:57:54.840347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 
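The `waitfornbd_exit` calls traced above poll `/proc/partitions` until the stopped nbd device disappears. A minimal self-contained sketch of that retry loop follows; `part_file` is a hypothetical stand-in for `/proc/partitions` so the sketch can run anywhere, and the 20-iteration bound mirrors the `(( i <= 20 ))` guard in the trace.

```shell
# Sketch of the waitfornbd_exit polling loop from the trace: retry up
# to 20 times until the device name is no longer listed. Assumption:
# part_file stands in for /proc/partitions.
part_file=$(mktemp)
printf 'nbd7\n' > "$part_file"   # some other device; nbd0 is already gone

nbd_name=nbd0
found=1
for ((i = 1; i <= 20; i++)); do
    if ! grep -q -w "$nbd_name" "$part_file"; then
        found=0   # device no longer present: safe to continue teardown
        break
    fi
    sleep 0.1
done
rm -f "$part_file"
echo "$found"
```

In the real script the `break` fires once the kernel has released the device; hitting the retry cap instead would indicate a stuck nbd teardown.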
00:05:47.657 [2024-07-25 11:57:54.840350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.657 [2024-07-25 11:57:54.888766] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:47.657 [2024-07-25 11:57:54.888808] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:50.946 11:57:57 -- event/event.sh@23 -- # for i in {0..2} 00:05:50.946 11:57:57 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:50.946 spdk_app_start Round 1 00:05:50.946 11:57:57 -- event/event.sh@25 -- # waitforlisten 1182734 /var/tmp/spdk-nbd.sock 00:05:50.946 11:57:57 -- common/autotest_common.sh@819 -- # '[' -z 1182734 ']' 00:05:50.946 11:57:57 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.946 11:57:57 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:50.946 11:57:57 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:50.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:50.946 11:57:57 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:50.946 11:57:57 -- common/autotest_common.sh@10 -- # set +x 00:05:50.946 11:57:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:50.946 11:57:57 -- common/autotest_common.sh@852 -- # return 0 00:05:50.946 11:57:57 -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.946 Malloc0 00:05:50.946 11:57:57 -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.946 Malloc1 00:05:50.946 11:57:58 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.946 11:57:58 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.946 11:57:58 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.946 11:57:58 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:50.946 11:57:58 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.946 11:57:58 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:50.946 11:57:58 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.946 11:57:58 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.946 11:57:58 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.946 11:57:58 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:50.946 11:57:58 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.946 11:57:58 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:50.946 11:57:58 -- bdev/nbd_common.sh@12 -- # local i 00:05:50.946 11:57:58 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:50.946 11:57:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.946 11:57:58 -- bdev/nbd_common.sh@15 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:50.946 /dev/nbd0 00:05:51.208 11:57:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:51.208 11:57:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:51.208 11:57:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:51.208 11:57:58 -- common/autotest_common.sh@857 -- # local i 00:05:51.208 11:57:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:51.208 11:57:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:51.208 11:57:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:51.208 11:57:58 -- common/autotest_common.sh@861 -- # break 00:05:51.208 11:57:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:51.208 11:57:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:51.208 11:57:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.208 1+0 records in 00:05:51.208 1+0 records out 00:05:51.208 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000122841 s, 33.3 MB/s 00:05:51.208 11:57:58 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:05:51.208 11:57:58 -- common/autotest_common.sh@874 -- # size=4096 00:05:51.208 11:57:58 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:05:51.208 11:57:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:51.208 11:57:58 -- common/autotest_common.sh@877 -- # return 0 00:05:51.208 11:57:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.208 11:57:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.208 11:57:58 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:51.208 
/dev/nbd1 00:05:51.208 11:57:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:51.208 11:57:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:51.208 11:57:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:51.208 11:57:58 -- common/autotest_common.sh@857 -- # local i 00:05:51.208 11:57:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:51.208 11:57:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:51.208 11:57:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:51.208 11:57:58 -- common/autotest_common.sh@861 -- # break 00:05:51.208 11:57:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:51.208 11:57:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:51.208 11:57:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.208 1+0 records in 00:05:51.208 1+0 records out 00:05:51.208 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000231648 s, 17.7 MB/s 00:05:51.208 11:57:58 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:05:51.208 11:57:58 -- common/autotest_common.sh@874 -- # size=4096 00:05:51.208 11:57:58 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:05:51.208 11:57:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:51.208 11:57:58 -- common/autotest_common.sh@877 -- # return 0 00:05:51.208 11:57:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.208 11:57:58 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.208 11:57:58 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.208 11:57:58 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.208 11:57:58 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_get_disks 00:05:51.468 11:57:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:51.468 { 00:05:51.468 "nbd_device": "/dev/nbd0", 00:05:51.468 "bdev_name": "Malloc0" 00:05:51.468 }, 00:05:51.468 { 00:05:51.468 "nbd_device": "/dev/nbd1", 00:05:51.468 "bdev_name": "Malloc1" 00:05:51.468 } 00:05:51.468 ]' 00:05:51.468 11:57:58 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:51.468 { 00:05:51.468 "nbd_device": "/dev/nbd0", 00:05:51.468 "bdev_name": "Malloc0" 00:05:51.468 }, 00:05:51.468 { 00:05:51.468 "nbd_device": "/dev/nbd1", 00:05:51.468 "bdev_name": "Malloc1" 00:05:51.468 } 00:05:51.468 ]' 00:05:51.468 11:57:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.468 11:57:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:51.468 /dev/nbd1' 00:05:51.468 11:57:58 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:51.468 /dev/nbd1' 00:05:51.468 11:57:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.468 11:57:58 -- bdev/nbd_common.sh@65 -- # count=2 00:05:51.468 11:57:58 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:51.468 11:57:58 -- bdev/nbd_common.sh@95 -- # count=2 00:05:51.468 11:57:58 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:51.468 11:57:58 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:51.468 11:57:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.468 11:57:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.468 11:57:58 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:51.468 11:57:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.468 11:57:58 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:51.468 11:57:58 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:51.468 256+0 records in 00:05:51.468 256+0 records out 00:05:51.468 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0114392 s, 91.7 MB/s 00:05:51.468 11:57:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.468 11:57:58 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:51.468 256+0 records in 00:05:51.468 256+0 records out 00:05:51.468 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0200464 s, 52.3 MB/s 00:05:51.468 11:57:58 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.468 11:57:58 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:51.727 256+0 records in 00:05:51.727 256+0 records out 00:05:51.727 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214875 s, 48.8 MB/s 00:05:51.727 11:57:58 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:51.727 11:57:58 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.727 11:57:58 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.727 11:57:58 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:51.727 11:57:58 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.727 11:57:58 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:51.727 11:57:58 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:51.727 11:57:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.727 11:57:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:51.727 11:57:58 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.727 11:57:58 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:51.727 11:57:58 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 
00:05:51.727 11:57:58 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:51.727 11:57:58 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.727 11:57:58 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.727 11:57:58 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:51.727 11:57:58 -- bdev/nbd_common.sh@51 -- # local i 00:05:51.727 11:57:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.727 11:57:58 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:51.727 11:57:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:51.727 11:57:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:51.727 11:57:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:51.727 11:57:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.727 11:57:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.727 11:57:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:51.727 11:57:58 -- bdev/nbd_common.sh@41 -- # break 00:05:51.727 11:57:58 -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.727 11:57:58 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.727 11:57:58 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:51.986 11:57:59 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:51.986 11:57:59 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:51.986 11:57:59 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:51.986 11:57:59 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.986 11:57:59 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.986 11:57:59 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:51.986 11:57:59 -- bdev/nbd_common.sh@41 -- # break 00:05:51.986 11:57:59 -- bdev/nbd_common.sh@45 -- # 
return 0 00:05:51.986 11:57:59 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.986 11:57:59 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.986 11:57:59 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.245 11:57:59 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:52.245 11:57:59 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:52.245 11:57:59 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.245 11:57:59 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:52.245 11:57:59 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:52.245 11:57:59 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.245 11:57:59 -- bdev/nbd_common.sh@65 -- # true 00:05:52.245 11:57:59 -- bdev/nbd_common.sh@65 -- # count=0 00:05:52.245 11:57:59 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:52.245 11:57:59 -- bdev/nbd_common.sh@104 -- # count=0 00:05:52.245 11:57:59 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:52.245 11:57:59 -- bdev/nbd_common.sh@109 -- # return 0 00:05:52.245 11:57:59 -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:52.504 11:57:59 -- event/event.sh@35 -- # sleep 3 00:05:52.799 [2024-07-25 11:57:59.844180] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.799 [2024-07-25 11:57:59.924601] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.799 [2024-07-25 11:57:59.924603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.799 [2024-07-25 11:57:59.973046] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:52.799 [2024-07-25 11:57:59.973087] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
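The `nbd_get_count` steps in the trace parse the JSON returned by the `nbd_get_disks` RPC with `jq` and then count device paths with `grep -c`. A sketch of that parsing, with the JSON hard-coded here as an assumption (the trace obtains it live via `rpc.py -s /var/tmp/spdk-nbd.sock`):

```shell
# Sketch of the nbd_get_count parsing from the trace. Assumption: the
# JSON below is inlined; the real script reads it from the RPC server.
nbd_disks_json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" }
]'

# Extract the device paths, one per line, as bdev/nbd_common.sh@64 does.
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')

# Count attached devices by grepping for /dev/nbd (bdev/nbd_common.sh@65).
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd)
echo "$count"
```

After teardown the same pipeline runs against an empty `[]` array, where `grep -c` matches nothing and the `|| true` style `-- # true` entry in the trace absorbs its non-zero exit so `count=0` is reached.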
00:05:55.347 11:58:02 -- event/event.sh@23 -- # for i in {0..2} 00:05:55.347 11:58:02 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:55.347 spdk_app_start Round 2 00:05:55.347 11:58:02 -- event/event.sh@25 -- # waitforlisten 1182734 /var/tmp/spdk-nbd.sock 00:05:55.347 11:58:02 -- common/autotest_common.sh@819 -- # '[' -z 1182734 ']' 00:05:55.347 11:58:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:55.347 11:58:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:55.347 11:58:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:55.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:55.347 11:58:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:55.347 11:58:02 -- common/autotest_common.sh@10 -- # set +x 00:05:55.605 11:58:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:55.605 11:58:02 -- common/autotest_common.sh@852 -- # return 0 00:05:55.605 11:58:02 -- event/event.sh@27 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.864 Malloc0 00:05:55.864 11:58:02 -- event/event.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:55.864 Malloc1 00:05:55.864 11:58:03 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.864 11:58:03 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.864 11:58:03 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.864 11:58:03 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:55.864 11:58:03 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.864 11:58:03 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:55.864 11:58:03 -- 
bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:55.864 11:58:03 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.864 11:58:03 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:55.864 11:58:03 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:55.864 11:58:03 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:55.864 11:58:03 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:55.864 11:58:03 -- bdev/nbd_common.sh@12 -- # local i 00:05:55.864 11:58:03 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:55.864 11:58:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:55.864 11:58:03 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:56.122 /dev/nbd0 00:05:56.122 11:58:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:56.122 11:58:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:56.122 11:58:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:05:56.122 11:58:03 -- common/autotest_common.sh@857 -- # local i 00:05:56.122 11:58:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:56.122 11:58:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:56.122 11:58:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:05:56.122 11:58:03 -- common/autotest_common.sh@861 -- # break 00:05:56.122 11:58:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:56.122 11:58:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:56.122 11:58:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.122 1+0 records in 00:05:56.122 1+0 records out 00:05:56.122 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260814 s, 15.7 MB/s 00:05:56.122 11:58:03 -- common/autotest_common.sh@874 -- # stat 
-c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:05:56.122 11:58:03 -- common/autotest_common.sh@874 -- # size=4096 00:05:56.122 11:58:03 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:05:56.122 11:58:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:56.122 11:58:03 -- common/autotest_common.sh@877 -- # return 0 00:05:56.122 11:58:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.122 11:58:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.122 11:58:03 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:56.380 /dev/nbd1 00:05:56.380 11:58:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:56.380 11:58:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:56.380 11:58:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:05:56.380 11:58:03 -- common/autotest_common.sh@857 -- # local i 00:05:56.380 11:58:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:05:56.380 11:58:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:05:56.380 11:58:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:05:56.380 11:58:03 -- common/autotest_common.sh@861 -- # break 00:05:56.380 11:58:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:05:56.380 11:58:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:05:56.380 11:58:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:56.380 1+0 records in 00:05:56.380 1+0 records out 00:05:56.380 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000262444 s, 15.6 MB/s 00:05:56.380 11:58:03 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:05:56.380 11:58:03 -- common/autotest_common.sh@874 -- # 
size=4096 00:05:56.380 11:58:03 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdtest 00:05:56.380 11:58:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:05:56.380 11:58:03 -- common/autotest_common.sh@877 -- # return 0 00:05:56.380 11:58:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:56.380 11:58:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:56.380 11:58:03 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:56.380 11:58:03 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.380 11:58:03 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:56.638 11:58:03 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:56.638 { 00:05:56.638 "nbd_device": "/dev/nbd0", 00:05:56.638 "bdev_name": "Malloc0" 00:05:56.638 }, 00:05:56.638 { 00:05:56.638 "nbd_device": "/dev/nbd1", 00:05:56.638 "bdev_name": "Malloc1" 00:05:56.638 } 00:05:56.638 ]' 00:05:56.638 11:58:03 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:56.638 { 00:05:56.638 "nbd_device": "/dev/nbd0", 00:05:56.638 "bdev_name": "Malloc0" 00:05:56.638 }, 00:05:56.638 { 00:05:56.638 "nbd_device": "/dev/nbd1", 00:05:56.638 "bdev_name": "Malloc1" 00:05:56.638 } 00:05:56.638 ]' 00:05:56.639 11:58:03 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:56.639 11:58:03 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:56.639 /dev/nbd1' 00:05:56.639 11:58:03 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:56.639 /dev/nbd1' 00:05:56.639 11:58:03 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:56.639 11:58:03 -- bdev/nbd_common.sh@65 -- # count=2 00:05:56.639 11:58:03 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:56.639 11:58:03 -- bdev/nbd_common.sh@95 -- # count=2 00:05:56.639 11:58:03 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:56.639 11:58:03 -- bdev/nbd_common.sh@100 -- # 
nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:56.639 11:58:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.639 11:58:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.639 11:58:03 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:56.639 11:58:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest 00:05:56.639 11:58:03 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:56.639 11:58:03 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:56.639 256+0 records in 00:05:56.639 256+0 records out 00:05:56.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0109984 s, 95.3 MB/s 00:05:56.639 11:58:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.639 11:58:03 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:56.639 256+0 records in 00:05:56.639 256+0 records out 00:05:56.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204053 s, 51.4 MB/s 00:05:56.639 11:58:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:56.639 11:58:03 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:56.639 256+0 records in 00:05:56.639 256+0 records out 00:05:56.639 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212178 s, 49.4 MB/s 00:05:56.639 11:58:03 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:56.639 11:58:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:56.639 11:58:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:56.639 11:58:03 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:56.639 11:58:03 -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest
00:05:56.639 11:58:03 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:56.639 11:58:03 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:56.639 11:58:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:56.639 11:58:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:56.639 11:58:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:56.639 11:58:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:56.639 11:58:03 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/event/nbdrandtest
00:05:56.639 11:58:03 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:56.639 11:58:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:56.639 11:58:03 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:56.639 11:58:03 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:56.639 11:58:03 -- bdev/nbd_common.sh@51 -- # local i
00:05:56.639 11:58:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:56.639 11:58:03 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:56.897 11:58:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:56.897 11:58:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:56.897 11:58:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:56.897 11:58:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:56.897 11:58:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:56.897 11:58:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:56.897 11:58:04 -- bdev/nbd_common.sh@41 -- # break
00:05:56.897 11:58:04 -- bdev/nbd_common.sh@45 -- # return 0
00:05:56.897 11:58:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:56.897 11:58:04 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:57.156 11:58:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:57.156 11:58:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:57.156 11:58:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:57.156 11:58:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:57.156 11:58:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:57.156 11:58:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:57.156 11:58:04 -- bdev/nbd_common.sh@41 -- # break
00:05:57.156 11:58:04 -- bdev/nbd_common.sh@45 -- # return 0
00:05:57.156 11:58:04 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:57.156 11:58:04 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:57.156 11:58:04 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:57.156 11:58:04 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:57.156 11:58:04 -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:57.156 11:58:04 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:57.156 11:58:04 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:57.156 11:58:04 -- bdev/nbd_common.sh@65 -- # echo ''
00:05:57.156 11:58:04 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:57.156 11:58:04 -- bdev/nbd_common.sh@65 -- # true
00:05:57.156 11:58:04 -- bdev/nbd_common.sh@65 -- # count=0
00:05:57.156 11:58:04 -- bdev/nbd_common.sh@66 -- # echo 0
00:05:57.156 11:58:04 -- bdev/nbd_common.sh@104 -- # count=0
00:05:57.156 11:58:04 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:57.156 11:58:04 -- bdev/nbd_common.sh@109 -- # return 0
00:05:57.156 11:58:04 -- event/event.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:57.415 11:58:04 -- event/event.sh@35 -- # sleep 3
00:05:57.674 [2024-07-25 11:58:04.874025] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:57.674 [2024-07-25 11:58:04.961512] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:05:57.674 [2024-07-25 11:58:04.961516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:05:57.932 [2024-07-25 11:58:05.010426] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:57.932 [2024-07-25 11:58:05.010472] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:06:00.464 11:58:07 -- event/event.sh@38 -- # waitforlisten 1182734 /var/tmp/spdk-nbd.sock
00:06:00.464 11:58:07 -- common/autotest_common.sh@819 -- # '[' -z 1182734 ']'
00:06:00.464 11:58:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:06:00.464 11:58:07 -- common/autotest_common.sh@824 -- # local max_retries=100
00:06:00.464 11:58:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:06:00.464 11:58:07 -- common/autotest_common.sh@828 -- # xtrace_disable
00:06:00.464 11:58:07 -- common/autotest_common.sh@10 -- # set +x
00:06:00.723 11:58:07 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:06:00.723 11:58:07 -- common/autotest_common.sh@852 -- # return 0
00:06:00.723 11:58:07 -- event/event.sh@39 -- # killprocess 1182734
00:06:00.723 11:58:07 -- common/autotest_common.sh@926 -- # '[' -z 1182734 ']'
00:06:00.723 11:58:07 -- common/autotest_common.sh@930 -- # kill -0 1182734
00:06:00.723 11:58:07 -- common/autotest_common.sh@931 -- # uname
00:06:00.723 11:58:07 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:06:00.723 11:58:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1182734
00:06:00.723 11:58:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:06:00.723 11:58:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:06:00.723 11:58:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1182734'
killing process with pid 1182734
00:06:00.723 11:58:07 -- common/autotest_common.sh@945 -- # kill 1182734
00:06:00.723 11:58:07 -- common/autotest_common.sh@950 -- # wait 1182734
00:06:00.982 spdk_app_start is called in Round 0.
00:06:00.982 Shutdown signal received, stop current app iteration
00:06:00.982 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization...
00:06:00.982 spdk_app_start is called in Round 1.
00:06:00.982 Shutdown signal received, stop current app iteration
00:06:00.982 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization...
00:06:00.982 spdk_app_start is called in Round 2.
00:06:00.982 Shutdown signal received, stop current app iteration
00:06:00.982 Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 reinitialization...
00:06:00.982 spdk_app_start is called in Round 3.
00:06:00.982 Shutdown signal received, stop current app iteration
00:06:00.982 11:58:08 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT
00:06:00.982 11:58:08 -- event/event.sh@42 -- # return 0
00:06:00.982
00:06:00.982 real 0m16.337s
00:06:00.982 user 0m34.575s
00:06:00.982 sys 0m3.097s
00:06:00.982 11:58:08 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:00.982 11:58:08 -- common/autotest_common.sh@10 -- # set +x
00:06:00.982 ************************************
00:06:00.982 END TEST app_repeat
00:06:00.982 ************************************
00:06:00.982 11:58:08 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 ))
00:06:00.982
00:06:00.982 real 0m24.999s
00:06:00.982 user 0m49.508s
00:06:00.982 sys 0m4.072s
00:06:00.982 11:58:08 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:00.982 11:58:08 -- common/autotest_common.sh@10 -- # set +x
00:06:00.982 ************************************
00:06:00.982 END TEST event
00:06:00.982 ************************************
00:06:00.982 11:58:08 -- spdk/autotest.sh@188 -- # run_test thread /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh
00:06:00.982 11:58:08 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:06:00.982 11:58:08 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:00.982 11:58:08 -- common/autotest_common.sh@10 -- # set +x
00:06:00.982 ************************************
00:06:00.982 START TEST thread
00:06:00.982 ************************************
00:06:00.982 11:58:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/thread.sh
00:06:00.982 * Looking for test storage...
00:06:00.983 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread
00:06:00.983 11:58:08 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:06:00.983 11:58:08 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']'
00:06:00.983 11:58:08 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:00.983 11:58:08 -- common/autotest_common.sh@10 -- # set +x
00:06:00.983 ************************************
00:06:00.983 START TEST thread_poller_perf
00:06:00.983 ************************************
00:06:00.983 11:58:08 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1
00:06:01.242 [2024-07-25 11:58:08.293467] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:06:01.242 [2024-07-25 11:58:08.293539] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1185129 ]
00:06:01.242 [2024-07-25 11:58:08.381903] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:01.242 [2024-07-25 11:58:08.464542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:01.242 Running 1000 pollers for 1 seconds with 1 microseconds period.
00:06:02.621 ======================================
00:06:02.621 busy:2305936276 (cyc)
00:06:02.621 total_run_count: 403000
00:06:02.621 tsc_hz: 2300000000 (cyc)
00:06:02.621 ======================================
00:06:02.621 poller_cost: 5721 (cyc), 2487 (nsec)
00:06:02.621
00:06:02.621 real 0m1.307s
00:06:02.621 user 0m1.191s
00:06:02.621 sys 0m0.110s
00:06:02.621 11:58:09 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:02.621 11:58:09 -- common/autotest_common.sh@10 -- # set +x
00:06:02.621 ************************************
00:06:02.621 END TEST thread_poller_perf
00:06:02.621 ************************************
00:06:02.621 11:58:09 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:06:02.621 11:58:09 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']'
00:06:02.621 11:58:09 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:02.621 11:58:09 -- common/autotest_common.sh@10 -- # set +x
00:06:02.621 ************************************
00:06:02.621 START TEST thread_poller_perf
00:06:02.621 ************************************
00:06:02.621 11:58:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1
00:06:02.621 [2024-07-25 11:58:09.634096] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:06:02.621 [2024-07-25 11:58:09.634149] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1185341 ]
00:06:02.621 [2024-07-25 11:58:09.712497] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:02.621 [2024-07-25 11:58:09.791946] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:02.621 Running 1000 pollers for 1 seconds with 0 microseconds period.
00:06:03.999 ======================================
00:06:03.999 busy:2302050680 (cyc)
00:06:03.999 total_run_count: 5483000
00:06:03.999 tsc_hz: 2300000000 (cyc)
00:06:03.999 ======================================
00:06:04.000 poller_cost: 419 (cyc), 182 (nsec)
00:06:04.000
00:06:04.000 real 0m1.277s
00:06:04.000 user 0m1.179s
00:06:04.000 sys 0m0.092s
00:06:04.000 11:58:10 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:04.000 11:58:10 -- common/autotest_common.sh@10 -- # set +x
00:06:04.000 ************************************
00:06:04.000 END TEST thread_poller_perf
00:06:04.000 ************************************
00:06:04.000 11:58:10 -- thread/thread.sh@17 -- # [[ y != \y ]]
00:06:04.000
00:06:04.000 real 0m2.769s
00:06:04.000 user 0m2.441s
00:06:04.000 sys 0m0.341s
00:06:04.000 11:58:10 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:04.000 11:58:10 -- common/autotest_common.sh@10 -- # set +x
00:06:04.000 ************************************
00:06:04.000 END TEST thread
00:06:04.000 ************************************
00:06:04.000 11:58:10 -- spdk/autotest.sh@189 -- # run_test accel /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh
00:06:04.000 11:58:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']'
00:06:04.000 11:58:10 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:04.000 11:58:10 -- common/autotest_common.sh@10 -- # set +x
00:06:04.000 ************************************
00:06:04.000 START TEST accel
00:06:04.000 ************************************
00:06:04.000 11:58:10 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel.sh
00:06:04.000 * Looking for test storage...
00:06:04.000 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel
00:06:04.000 11:58:11 -- accel/accel.sh@73 -- # declare -A expected_opcs
00:06:04.000 11:58:11 -- accel/accel.sh@74 -- # get_expected_opcs
00:06:04.000 11:58:11 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR
00:06:04.000 11:58:11 -- accel/accel.sh@59 -- # spdk_tgt_pid=1185585
00:06:04.000 11:58:11 -- accel/accel.sh@60 -- # waitforlisten 1185585
00:06:04.000 11:58:11 -- common/autotest_common.sh@819 -- # '[' -z 1185585 ']'
00:06:04.000 11:58:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:04.000 11:58:11 -- common/autotest_common.sh@824 -- # local max_retries=100
00:06:04.000 11:58:11 -- accel/accel.sh@58 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63
00:06:04.000 11:58:11 -- accel/accel.sh@58 -- # build_accel_config
00:06:04.000 11:58:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:04.000 11:58:11 -- common/autotest_common.sh@828 -- # xtrace_disable
00:06:04.000 11:58:11 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:04.000 11:58:11 -- common/autotest_common.sh@10 -- # set +x
00:06:04.000 11:58:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:04.000 11:58:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:04.000 11:58:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:04.000 11:58:11 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:04.000 11:58:11 -- accel/accel.sh@41 -- # local IFS=,
00:06:04.000 11:58:11 -- accel/accel.sh@42 -- # jq -r .
00:06:04.000 [2024-07-25 11:58:11.120467] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:06:04.000 [2024-07-25 11:58:11.120527] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1185585 ]
00:06:04.000 [2024-07-25 11:58:11.205467] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:04.000 [2024-07-25 11:58:11.284247] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long
00:06:04.000 [2024-07-25 11:58:11.284384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:04.937 11:58:11 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:06:04.937 11:58:11 -- common/autotest_common.sh@852 -- # return 0
00:06:04.937 11:58:11 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]"))
00:06:04.937 11:58:11 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments
00:06:04.937 11:58:11 -- common/autotest_common.sh@551 -- # xtrace_disable
00:06:04.937 11:58:11 -- accel/accel.sh@62 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
00:06:04.937 11:58:11 -- common/autotest_common.sh@10 -- # set +x
00:06:04.937 11:58:11 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:06:04.937 11:58:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:06:04.937 11:58:11 -- accel/accel.sh@64 -- # IFS==
00:06:04.937 11:58:11 -- accel/accel.sh@64 -- # read -r opc module
00:06:04.937 11:58:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:06:04.937 11:58:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:06:04.937 11:58:11 -- accel/accel.sh@64 -- # IFS==
00:06:04.937 11:58:11 -- accel/accel.sh@64 -- # read -r opc module
00:06:04.937 11:58:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:06:04.937 11:58:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:06:04.937 11:58:11 -- accel/accel.sh@64 -- # IFS==
00:06:04.937 11:58:11 -- accel/accel.sh@64 -- # read -r opc module
00:06:04.937 11:58:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:06:04.937 11:58:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:06:04.937 11:58:11 -- accel/accel.sh@64 -- # IFS==
00:06:04.937 11:58:11 -- accel/accel.sh@64 -- # read -r opc module
00:06:04.937 11:58:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:06:04.937 11:58:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:06:04.937 11:58:11 -- accel/accel.sh@64 -- # IFS==
00:06:04.937 11:58:11 -- accel/accel.sh@64 -- # read -r opc module
00:06:04.937 11:58:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:06:04.937 11:58:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:06:04.937 11:58:11 -- accel/accel.sh@64 -- # IFS==
00:06:04.937 11:58:11 -- accel/accel.sh@64 -- # read -r opc module
00:06:04.937 11:58:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:06:04.937 11:58:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:06:04.937 11:58:11 -- accel/accel.sh@64 -- # IFS==
00:06:04.938 11:58:11 -- accel/accel.sh@64 -- # read -r opc module
00:06:04.938 11:58:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:06:04.938 11:58:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:06:04.938 11:58:11 -- accel/accel.sh@64 -- # IFS==
00:06:04.938 11:58:11 -- accel/accel.sh@64 -- # read -r opc module
00:06:04.938 11:58:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:06:04.938 11:58:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:06:04.938 11:58:11 -- accel/accel.sh@64 -- # IFS==
00:06:04.938 11:58:11 -- accel/accel.sh@64 -- # read -r opc module
00:06:04.938 11:58:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:06:04.938 11:58:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:06:04.938 11:58:11 -- accel/accel.sh@64 -- # IFS==
00:06:04.938 11:58:11 -- accel/accel.sh@64 -- # read -r opc module
00:06:04.938 11:58:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:06:04.938 11:58:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:06:04.938 11:58:11 -- accel/accel.sh@64 -- # IFS==
00:06:04.938 11:58:11 -- accel/accel.sh@64 -- # read -r opc module
00:06:04.938 11:58:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:06:04.938 11:58:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:06:04.938 11:58:11 -- accel/accel.sh@64 -- # IFS==
00:06:04.938 11:58:11 -- accel/accel.sh@64 -- # read -r opc module
00:06:04.938 11:58:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:06:04.938 11:58:11 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}"
00:06:04.938 11:58:11 -- accel/accel.sh@64 -- # IFS==
00:06:04.938 11:58:11 -- accel/accel.sh@64 -- # read -r opc module
00:06:04.938 11:58:11 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software
00:06:04.938 11:58:11 -- accel/accel.sh@67 -- # killprocess 1185585
00:06:04.938 11:58:11 -- common/autotest_common.sh@926 -- # '[' -z 1185585 ']'
00:06:04.938 11:58:11 -- common/autotest_common.sh@930 -- # kill -0 1185585
00:06:04.938 11:58:11 -- common/autotest_common.sh@931 -- # uname
00:06:04.938 11:58:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:06:04.938 11:58:11 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1185585
00:06:04.938 11:58:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:06:04.938 11:58:12 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:06:04.938 11:58:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1185585'
killing process with pid 1185585
00:06:04.938 11:58:12 -- common/autotest_common.sh@945 -- # kill 1185585
00:06:04.938 11:58:12 -- common/autotest_common.sh@950 -- # wait 1185585
00:06:05.198 11:58:12 -- accel/accel.sh@68 -- # trap - ERR
00:06:05.198 11:58:12 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h
00:06:05.198 11:58:12 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:06:05.198 11:58:12 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:05.198 11:58:12 -- common/autotest_common.sh@10 -- # set +x
00:06:05.198 11:58:12 -- common/autotest_common.sh@1104 -- # accel_perf -h
00:06:05.198 11:58:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h
00:06:05.198 11:58:12 -- accel/accel.sh@12 -- # build_accel_config
00:06:05.198 11:58:12 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:05.198 11:58:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:05.198 11:58:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:05.198 11:58:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:05.198 11:58:12 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:05.198 11:58:12 -- accel/accel.sh@41 -- # local IFS=,
00:06:05.198 11:58:12 -- accel/accel.sh@42 -- # jq -r .
00:06:05.198 11:58:12 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:05.198 11:58:12 -- common/autotest_common.sh@10 -- # set +x
00:06:05.198 11:58:12 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress
00:06:05.198 11:58:12 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']'
00:06:05.198 11:58:12 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:05.198 11:58:12 -- common/autotest_common.sh@10 -- # set +x
00:06:05.198 ************************************
00:06:05.198 START TEST accel_missing_filename
00:06:05.198 ************************************
00:06:05.198 11:58:12 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress
00:06:05.198 11:58:12 -- common/autotest_common.sh@640 -- # local es=0
00:06:05.198 11:58:12 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress
00:06:05.198 11:58:12 -- common/autotest_common.sh@628 -- # local arg=accel_perf
00:06:05.198 11:58:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:06:05.198 11:58:12 -- common/autotest_common.sh@632 -- # type -t accel_perf
00:06:05.198 11:58:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:06:05.198 11:58:12 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress
00:06:05.198 11:58:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress
00:06:05.198 11:58:12 -- accel/accel.sh@12 -- # build_accel_config
00:06:05.198 11:58:12 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:05.198 11:58:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:05.198 11:58:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:05.198 11:58:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:05.198 11:58:12 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:05.198 11:58:12 -- accel/accel.sh@41 -- # local IFS=,
00:06:05.198 11:58:12 -- accel/accel.sh@42 -- # jq -r .
00:06:05.198 [2024-07-25 11:58:12.496180] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:06:05.198 [2024-07-25 11:58:12.496257] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1185795 ]
00:06:05.457 [2024-07-25 11:58:12.584852] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:05.457 [2024-07-25 11:58:12.671498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:05.457 [2024-07-25 11:58:12.733874] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:05.725 [2024-07-25 11:58:12.803966] accel_perf.c:1385:main: *ERROR*: ERROR starting application
00:06:05.725 A filename is required.
00:06:05.725 11:58:12 -- common/autotest_common.sh@643 -- # es=234
00:06:05.725 11:58:12 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:06:05.725 11:58:12 -- common/autotest_common.sh@652 -- # es=106
00:06:05.725 11:58:12 -- common/autotest_common.sh@653 -- # case "$es" in
00:06:05.725 11:58:12 -- common/autotest_common.sh@660 -- # es=1
00:06:05.725 11:58:12 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:06:05.725
00:06:05.725 real 0m0.452s
00:06:05.725 user 0m0.312s
00:06:05.725 sys 0m0.164s
00:06:05.725 11:58:12 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:05.725 11:58:12 -- common/autotest_common.sh@10 -- # set +x
00:06:05.725 ************************************
00:06:05.725 END TEST accel_missing_filename
00:06:05.725 ************************************
00:06:05.725 11:58:12 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:06:05.725 11:58:12 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']'
00:06:05.725 11:58:12 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:05.725 11:58:12 -- common/autotest_common.sh@10 -- # set +x
00:06:05.725 ************************************
00:06:05.725 START TEST accel_compress_verify
00:06:05.725 ************************************
00:06:05.725 11:58:12 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:06:05.725 11:58:12 -- common/autotest_common.sh@640 -- # local es=0
00:06:05.725 11:58:12 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:06:05.725 11:58:12 -- common/autotest_common.sh@628 -- # local arg=accel_perf
00:06:05.725 11:58:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:06:05.725 11:58:12 -- common/autotest_common.sh@632 -- # type -t accel_perf
00:06:05.725 11:58:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:06:05.725 11:58:12 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:06:05.725 11:58:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y
00:06:05.725 11:58:12 -- accel/accel.sh@12 -- # build_accel_config
00:06:05.725 11:58:12 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:05.725 11:58:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:05.725 11:58:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:05.725 11:58:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:05.725 11:58:12 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:05.725 11:58:12 -- accel/accel.sh@41 -- # local IFS=,
00:06:05.725 11:58:12 -- accel/accel.sh@42 -- # jq -r .
00:06:05.726 [2024-07-25 11:58:12.998565] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:06:05.726 [2024-07-25 11:58:12.998633] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1185849 ]
00:06:05.985 [2024-07-25 11:58:13.088250] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:05.985 [2024-07-25 11:58:13.171225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:06:05.985 [2024-07-25 11:58:13.234169] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:06.245 [2024-07-25 11:58:13.297557] accel_perf.c:1385:main: *ERROR*: ERROR starting application
00:06:06.245
00:06:06.245 Compression does not support the verify option, aborting.
00:06:06.245 11:58:13 -- common/autotest_common.sh@643 -- # es=161
00:06:06.245 11:58:13 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:06:06.245 11:58:13 -- common/autotest_common.sh@652 -- # es=33
00:06:06.245 11:58:13 -- common/autotest_common.sh@653 -- # case "$es" in
00:06:06.245 11:58:13 -- common/autotest_common.sh@660 -- # es=1
00:06:06.245 11:58:13 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:06:06.245
00:06:06.245 real 0m0.438s
00:06:06.245 user 0m0.296s
00:06:06.245 sys 0m0.165s
00:06:06.245 11:58:13 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:06.245 11:58:13 -- common/autotest_common.sh@10 -- # set +x
00:06:06.245 ************************************
00:06:06.245 END TEST accel_compress_verify
00:06:06.245 ************************************
00:06:06.245 11:58:13 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar
00:06:06.245 11:58:13 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']'
00:06:06.245 11:58:13 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:06.245 11:58:13 -- common/autotest_common.sh@10 -- # set +x
00:06:06.245 ************************************
00:06:06.245 START TEST accel_wrong_workload
00:06:06.245 ************************************
00:06:06.245 11:58:13 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar
00:06:06.245 11:58:13 -- common/autotest_common.sh@640 -- # local es=0
00:06:06.245 11:58:13 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar
00:06:06.245 11:58:13 -- common/autotest_common.sh@628 -- # local arg=accel_perf
00:06:06.245 11:58:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:06:06.245 11:58:13 -- common/autotest_common.sh@632 -- # type -t accel_perf
00:06:06.245 11:58:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:06:06.245 11:58:13 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar
00:06:06.245 11:58:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar
00:06:06.245 11:58:13 -- accel/accel.sh@12 -- # build_accel_config
00:06:06.245 11:58:13 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:06.245 11:58:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:06.245 11:58:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:06.245 11:58:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:06.245 11:58:13 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:06.245 11:58:13 -- accel/accel.sh@41 -- # local IFS=,
00:06:06.245 11:58:13 -- accel/accel.sh@42 -- # jq -r .
00:06:06.245 Unsupported workload type: foobar
00:06:06.245 [2024-07-25 11:58:13.483719] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1
00:06:06.245 accel_perf options:
00:06:06.245 [-h help message]
00:06:06.245 [-q queue depth per core]
00:06:06.245 [-C for supported workloads, use this value to configure the io vector size to test (default 1)
00:06:06.245 [-T number of threads per core
00:06:06.245 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:06:06.245 [-t time in seconds]
00:06:06.245 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:06:06.245 [ dif_verify, , dif_generate, dif_generate_copy
00:06:06.245 [-M assign module to the operation, not compatible with accel_assign_opc RPC
00:06:06.245 [-l for compress/decompress workloads, name of uncompressed input file
00:06:06.245 [-S for crc32c workload, use this seed value (default 0)
00:06:06.245 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)
00:06:06.245 [-f for fill workload, use this BYTE value (default 255)
00:06:06.245 [-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:06:06.245 [-y verify result if this switch is on]
00:06:06.245 [-a tasks to allocate per core (default: same value as -q)]
00:06:06.245 Can be used to spread operations across a wider range of memory.
00:06:06.246 11:58:13 -- common/autotest_common.sh@643 -- # es=1
00:06:06.246 11:58:13 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:06:06.246 11:58:13 -- common/autotest_common.sh@662 -- # [[ -n '' ]]
00:06:06.246 11:58:13 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:06:06.246
00:06:06.246 real 0m0.041s
00:06:06.246 user 0m0.025s
00:06:06.246 sys 0m0.016s
00:06:06.246 11:58:13 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:06.246 11:58:13 -- common/autotest_common.sh@10 -- # set +x
00:06:06.246 ************************************
00:06:06.246 END TEST accel_wrong_workload
00:06:06.246 ************************************
00:06:06.246 Error: writing output failed: Broken pipe
00:06:06.246 11:58:13 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1
00:06:06.246 11:58:13 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']'
00:06:06.246 11:58:13 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:06:06.246 11:58:13 -- common/autotest_common.sh@10 -- # set +x
00:06:06.246 ************************************
00:06:06.246 START TEST accel_negative_buffers
00:06:06.246 ************************************
00:06:06.246 11:58:13 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1
00:06:06.246 11:58:13 -- common/autotest_common.sh@640 -- # local es=0
00:06:06.246 11:58:13 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1
00:06:06.246 11:58:13 -- common/autotest_common.sh@628 -- # local arg=accel_perf
00:06:06.246 11:58:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:06:06.246 11:58:13 -- common/autotest_common.sh@632 -- # type -t accel_perf
00:06:06.246 11:58:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:06:06.246 11:58:13 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y -x -1
00:06:06.246 11:58:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1
00:06:06.246 11:58:13 -- accel/accel.sh@12 -- # build_accel_config
00:06:06.246 11:58:13 -- accel/accel.sh@32 -- # accel_json_cfg=()
00:06:06.246 11:58:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:06:06.246 11:58:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:06.246 11:58:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]]
00:06:06.246 11:58:13 -- accel/accel.sh@37 -- # [[ -n '' ]]
00:06:06.246 11:58:13 -- accel/accel.sh@41 -- # local IFS=,
00:06:06.246 11:58:13 -- accel/accel.sh@42 -- # jq -r .
00:06:06.505 -x option must be non-negative.
00:06:06.505 [2024-07-25 11:58:13.572139] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1
00:06:06.505 accel_perf options:
00:06:06.505 [-h help message]
00:06:06.505 [-q queue depth per core]
00:06:06.505 [-C for supported workloads, use this value to configure the io vector size to test (default 1)
00:06:06.505 [-T number of threads per core
00:06:06.505 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:06:06.505 [-t time in seconds]
00:06:06.505 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:06:06.505 [ dif_verify, , dif_generate, dif_generate_copy
00:06:06.505 [-M assign module to the operation, not compatible with accel_assign_opc RPC
00:06:06.505 [-l for compress/decompress workloads, name of uncompressed input file
00:06:06.505 [-S for crc32c workload, use this seed value (default 0)
00:06:06.505 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)
00:06:06.505 [-f for fill workload, use this BYTE value (default 255)
00:06:06.505 [-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:06:06.505 [-y verify result if this switch is on]
00:06:06.505 [-a tasks to allocate per core (default: same value as -q)]
00:06:06.505 Can be used to spread operations across a wider range of memory.
00:06:06.505 11:58:13 -- common/autotest_common.sh@643 -- # es=1
00:06:06.505 11:58:13 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:06:06.505 11:58:13 -- common/autotest_common.sh@662 -- # [[ -n '' ]]
00:06:06.505 11:58:13 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:06:06.505
00:06:06.505 real 0m0.042s
00:06:06.505 user 0m0.023s
00:06:06.505 sys 0m0.019s
00:06:06.505 11:58:13 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:06.505 11:58:13 -- common/autotest_common.sh@10 -- # set +x
00:06:06.505 ************************************
00:06:06.505 END TEST accel_negative_buffers
00:06:06.505 ************************************
00:06:06.505 Error: writing output failed: Broken pipe
00:06:06.505 11:58:13 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y
00:06:06.505 11:58:13 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']'
00:06:06.505 11:58:13 -- common/autotest_common.sh@1083 -- # xtrace_disable
11:58:13 -- common/autotest_common.sh@10 -- # set +x 00:06:06.505 ************************************ 00:06:06.505 START TEST accel_crc32c 00:06:06.505 ************************************ 00:06:06.505 11:58:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:06.505 11:58:13 -- accel/accel.sh@16 -- # local accel_opc 00:06:06.505 11:58:13 -- accel/accel.sh@17 -- # local accel_module 00:06:06.505 11:58:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:06.505 11:58:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:06.505 11:58:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:06.505 11:58:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:06.505 11:58:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:06.505 11:58:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:06.505 11:58:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:06.505 11:58:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:06.505 11:58:13 -- accel/accel.sh@41 -- # local IFS=, 00:06:06.505 11:58:13 -- accel/accel.sh@42 -- # jq -r . 00:06:06.505 [2024-07-25 11:58:13.656902] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:06.505 [2024-07-25 11:58:13.656972] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1186044 ] 00:06:06.505 [2024-07-25 11:58:13.745146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.764 [2024-07-25 11:58:13.832993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.141 11:58:15 -- accel/accel.sh@18 -- # out=' 00:06:08.141 SPDK Configuration: 00:06:08.141 Core mask: 0x1 00:06:08.141 00:06:08.141 Accel Perf Configuration: 00:06:08.141 Workload Type: crc32c 00:06:08.141 CRC-32C seed: 32 00:06:08.141 Transfer size: 4096 bytes 00:06:08.141 Vector count 1 00:06:08.141 Module: software 00:06:08.141 Queue depth: 32 00:06:08.141 Allocate depth: 32 00:06:08.141 # threads/core: 1 00:06:08.141 Run time: 1 seconds 00:06:08.141 Verify: Yes 00:06:08.141 00:06:08.141 Running for 1 seconds... 
00:06:08.141 00:06:08.141 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:08.142 ------------------------------------------------------------------------------------ 00:06:08.142 0,0 577728/s 2256 MiB/s 0 0 00:06:08.142 ==================================================================================== 00:06:08.142 Total 577728/s 2256 MiB/s 0 0' 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # IFS=: 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # read -r var val 00:06:08.142 11:58:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:08.142 11:58:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:08.142 11:58:15 -- accel/accel.sh@12 -- # build_accel_config 00:06:08.142 11:58:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:08.142 11:58:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.142 11:58:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.142 11:58:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:08.142 11:58:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:08.142 11:58:15 -- accel/accel.sh@41 -- # local IFS=, 00:06:08.142 11:58:15 -- accel/accel.sh@42 -- # jq -r . 00:06:08.142 [2024-07-25 11:58:15.102504] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:08.142 [2024-07-25 11:58:15.102568] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1186228 ] 00:06:08.142 [2024-07-25 11:58:15.188388] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.142 [2024-07-25 11:58:15.274283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.142 11:58:15 -- accel/accel.sh@21 -- # val= 00:06:08.142 11:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # IFS=: 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # read -r var val 00:06:08.142 11:58:15 -- accel/accel.sh@21 -- # val= 00:06:08.142 11:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # IFS=: 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # read -r var val 00:06:08.142 11:58:15 -- accel/accel.sh@21 -- # val=0x1 00:06:08.142 11:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # IFS=: 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # read -r var val 00:06:08.142 11:58:15 -- accel/accel.sh@21 -- # val= 00:06:08.142 11:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # IFS=: 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # read -r var val 00:06:08.142 11:58:15 -- accel/accel.sh@21 -- # val= 00:06:08.142 11:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # IFS=: 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # read -r var val 00:06:08.142 11:58:15 -- accel/accel.sh@21 -- # val=crc32c 00:06:08.142 11:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.142 11:58:15 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # IFS=: 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # read -r var val 00:06:08.142 11:58:15 -- 
accel/accel.sh@21 -- # val=32 00:06:08.142 11:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # IFS=: 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # read -r var val 00:06:08.142 11:58:15 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:08.142 11:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # IFS=: 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # read -r var val 00:06:08.142 11:58:15 -- accel/accel.sh@21 -- # val= 00:06:08.142 11:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # IFS=: 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # read -r var val 00:06:08.142 11:58:15 -- accel/accel.sh@21 -- # val=software 00:06:08.142 11:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.142 11:58:15 -- accel/accel.sh@23 -- # accel_module=software 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # IFS=: 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # read -r var val 00:06:08.142 11:58:15 -- accel/accel.sh@21 -- # val=32 00:06:08.142 11:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # IFS=: 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # read -r var val 00:06:08.142 11:58:15 -- accel/accel.sh@21 -- # val=32 00:06:08.142 11:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # IFS=: 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # read -r var val 00:06:08.142 11:58:15 -- accel/accel.sh@21 -- # val=1 00:06:08.142 11:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # IFS=: 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # read -r var val 00:06:08.142 11:58:15 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:08.142 11:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # IFS=: 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # read -r var val 00:06:08.142 11:58:15 -- accel/accel.sh@21 
-- # val=Yes 00:06:08.142 11:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # IFS=: 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # read -r var val 00:06:08.142 11:58:15 -- accel/accel.sh@21 -- # val= 00:06:08.142 11:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # IFS=: 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # read -r var val 00:06:08.142 11:58:15 -- accel/accel.sh@21 -- # val= 00:06:08.142 11:58:15 -- accel/accel.sh@22 -- # case "$var" in 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # IFS=: 00:06:08.142 11:58:15 -- accel/accel.sh@20 -- # read -r var val 00:06:09.521 11:58:16 -- accel/accel.sh@21 -- # val= 00:06:09.521 11:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.521 11:58:16 -- accel/accel.sh@20 -- # IFS=: 00:06:09.521 11:58:16 -- accel/accel.sh@20 -- # read -r var val 00:06:09.521 11:58:16 -- accel/accel.sh@21 -- # val= 00:06:09.521 11:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.521 11:58:16 -- accel/accel.sh@20 -- # IFS=: 00:06:09.521 11:58:16 -- accel/accel.sh@20 -- # read -r var val 00:06:09.521 11:58:16 -- accel/accel.sh@21 -- # val= 00:06:09.521 11:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.521 11:58:16 -- accel/accel.sh@20 -- # IFS=: 00:06:09.521 11:58:16 -- accel/accel.sh@20 -- # read -r var val 00:06:09.521 11:58:16 -- accel/accel.sh@21 -- # val= 00:06:09.521 11:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.521 11:58:16 -- accel/accel.sh@20 -- # IFS=: 00:06:09.521 11:58:16 -- accel/accel.sh@20 -- # read -r var val 00:06:09.521 11:58:16 -- accel/accel.sh@21 -- # val= 00:06:09.521 11:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.521 11:58:16 -- accel/accel.sh@20 -- # IFS=: 00:06:09.521 11:58:16 -- accel/accel.sh@20 -- # read -r var val 00:06:09.521 11:58:16 -- accel/accel.sh@21 -- # val= 00:06:09.521 11:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:06:09.521 11:58:16 -- accel/accel.sh@20 -- # 
IFS=: 00:06:09.521 11:58:16 -- accel/accel.sh@20 -- # read -r var val 00:06:09.521 11:58:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:09.521 11:58:16 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:09.521 11:58:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:09.521 00:06:09.521 real 0m2.901s 00:06:09.521 user 0m2.577s 00:06:09.521 sys 0m0.305s 00:06:09.521 11:58:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:09.521 11:58:16 -- common/autotest_common.sh@10 -- # set +x 00:06:09.521 ************************************ 00:06:09.521 END TEST accel_crc32c 00:06:09.521 ************************************ 00:06:09.521 11:58:16 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:09.521 11:58:16 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:09.521 11:58:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:09.521 11:58:16 -- common/autotest_common.sh@10 -- # set +x 00:06:09.521 ************************************ 00:06:09.521 START TEST accel_crc32c_C2 00:06:09.521 ************************************ 00:06:09.521 11:58:16 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:09.521 11:58:16 -- accel/accel.sh@16 -- # local accel_opc 00:06:09.521 11:58:16 -- accel/accel.sh@17 -- # local accel_module 00:06:09.521 11:58:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:09.521 11:58:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:09.521 11:58:16 -- accel/accel.sh@12 -- # build_accel_config 00:06:09.521 11:58:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:09.521 11:58:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:09.521 11:58:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:09.521 11:58:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:09.521 11:58:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:09.521 11:58:16 -- 
accel/accel.sh@41 -- # local IFS=, 00:06:09.521 11:58:16 -- accel/accel.sh@42 -- # jq -r . 00:06:09.521 [2024-07-25 11:58:16.592942] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:09.522 [2024-07-25 11:58:16.593001] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1186426 ] 00:06:09.522 [2024-07-25 11:58:16.678390] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.522 [2024-07-25 11:58:16.762467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.900 11:58:18 -- accel/accel.sh@18 -- # out=' 00:06:10.900 SPDK Configuration: 00:06:10.900 Core mask: 0x1 00:06:10.900 00:06:10.900 Accel Perf Configuration: 00:06:10.900 Workload Type: crc32c 00:06:10.900 CRC-32C seed: 0 00:06:10.900 Transfer size: 4096 bytes 00:06:10.900 Vector count 2 00:06:10.900 Module: software 00:06:10.900 Queue depth: 32 00:06:10.900 Allocate depth: 32 00:06:10.900 # threads/core: 1 00:06:10.900 Run time: 1 seconds 00:06:10.900 Verify: Yes 00:06:10.900 00:06:10.900 Running for 1 seconds... 
00:06:10.900 00:06:10.900 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:10.900 ------------------------------------------------------------------------------------ 00:06:10.900 0,0 459328/s 3588 MiB/s 0 0 00:06:10.900 ==================================================================================== 00:06:10.900 Total 459328/s 1794 MiB/s 0 0' 00:06:10.900 11:58:18 -- accel/accel.sh@20 -- # IFS=: 00:06:10.900 11:58:18 -- accel/accel.sh@20 -- # read -r var val 00:06:10.900 11:58:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:10.900 11:58:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:10.901 11:58:18 -- accel/accel.sh@12 -- # build_accel_config 00:06:10.901 11:58:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:10.901 11:58:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.901 11:58:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.901 11:58:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:10.901 11:58:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:10.901 11:58:18 -- accel/accel.sh@41 -- # local IFS=, 00:06:10.901 11:58:18 -- accel/accel.sh@42 -- # jq -r . 00:06:10.901 [2024-07-25 11:58:18.035251] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:10.901 [2024-07-25 11:58:18.035319] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1186614 ] 00:06:10.901 [2024-07-25 11:58:18.119734] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.901 [2024-07-25 11:58:18.199059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.160 11:58:18 -- accel/accel.sh@21 -- # val= 00:06:11.160 11:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.160 11:58:18 -- accel/accel.sh@20 -- # IFS=: 00:06:11.160 11:58:18 -- accel/accel.sh@20 -- # read -r var val 00:06:11.160 11:58:18 -- accel/accel.sh@21 -- # val= 00:06:11.160 11:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.160 11:58:18 -- accel/accel.sh@20 -- # IFS=: 00:06:11.160 11:58:18 -- accel/accel.sh@20 -- # read -r var val 00:06:11.160 11:58:18 -- accel/accel.sh@21 -- # val=0x1 00:06:11.160 11:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.160 11:58:18 -- accel/accel.sh@20 -- # IFS=: 00:06:11.160 11:58:18 -- accel/accel.sh@20 -- # read -r var val 00:06:11.160 11:58:18 -- accel/accel.sh@21 -- # val= 00:06:11.160 11:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.160 11:58:18 -- accel/accel.sh@20 -- # IFS=: 00:06:11.160 11:58:18 -- accel/accel.sh@20 -- # read -r var val 00:06:11.160 11:58:18 -- accel/accel.sh@21 -- # val= 00:06:11.160 11:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.160 11:58:18 -- accel/accel.sh@20 -- # IFS=: 00:06:11.160 11:58:18 -- accel/accel.sh@20 -- # read -r var val 00:06:11.160 11:58:18 -- accel/accel.sh@21 -- # val=crc32c 00:06:11.160 11:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.160 11:58:18 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:11.160 11:58:18 -- accel/accel.sh@20 -- # IFS=: 00:06:11.160 11:58:18 -- accel/accel.sh@20 -- # read -r var val 00:06:11.160 11:58:18 -- 
accel/accel.sh@21 -- # val=0 00:06:11.160 11:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.160 11:58:18 -- accel/accel.sh@20 -- # IFS=: 00:06:11.160 11:58:18 -- accel/accel.sh@20 -- # read -r var val 00:06:11.160 11:58:18 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:11.160 11:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.160 11:58:18 -- accel/accel.sh@20 -- # IFS=: 00:06:11.160 11:58:18 -- accel/accel.sh@20 -- # read -r var val 00:06:11.160 11:58:18 -- accel/accel.sh@21 -- # val= 00:06:11.160 11:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.160 11:58:18 -- accel/accel.sh@20 -- # IFS=: 00:06:11.160 11:58:18 -- accel/accel.sh@20 -- # read -r var val 00:06:11.160 11:58:18 -- accel/accel.sh@21 -- # val=software 00:06:11.160 11:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.160 11:58:18 -- accel/accel.sh@23 -- # accel_module=software 00:06:11.160 11:58:18 -- accel/accel.sh@20 -- # IFS=: 00:06:11.160 11:58:18 -- accel/accel.sh@20 -- # read -r var val 00:06:11.160 11:58:18 -- accel/accel.sh@21 -- # val=32 00:06:11.160 11:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.160 11:58:18 -- accel/accel.sh@20 -- # IFS=: 00:06:11.160 11:58:18 -- accel/accel.sh@20 -- # read -r var val 00:06:11.160 11:58:18 -- accel/accel.sh@21 -- # val=32 00:06:11.160 11:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.160 11:58:18 -- accel/accel.sh@20 -- # IFS=: 00:06:11.160 11:58:18 -- accel/accel.sh@20 -- # read -r var val 00:06:11.160 11:58:18 -- accel/accel.sh@21 -- # val=1 00:06:11.160 11:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.160 11:58:18 -- accel/accel.sh@20 -- # IFS=: 00:06:11.160 11:58:18 -- accel/accel.sh@20 -- # read -r var val 00:06:11.160 11:58:18 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:11.160 11:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.160 11:58:18 -- accel/accel.sh@20 -- # IFS=: 00:06:11.160 11:58:18 -- accel/accel.sh@20 -- # read -r var val 00:06:11.160 11:58:18 -- accel/accel.sh@21 -- 
# val=Yes 00:06:11.160 11:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.160 11:58:18 -- accel/accel.sh@20 -- # IFS=: 00:06:11.160 11:58:18 -- accel/accel.sh@20 -- # read -r var val 00:06:11.160 11:58:18 -- accel/accel.sh@21 -- # val= 00:06:11.160 11:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.160 11:58:18 -- accel/accel.sh@20 -- # IFS=: 00:06:11.160 11:58:18 -- accel/accel.sh@20 -- # read -r var val 00:06:11.160 11:58:18 -- accel/accel.sh@21 -- # val= 00:06:11.160 11:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:06:11.160 11:58:18 -- accel/accel.sh@20 -- # IFS=: 00:06:11.160 11:58:18 -- accel/accel.sh@20 -- # read -r var val 00:06:12.538 11:58:19 -- accel/accel.sh@21 -- # val= 00:06:12.538 11:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.538 11:58:19 -- accel/accel.sh@20 -- # IFS=: 00:06:12.538 11:58:19 -- accel/accel.sh@20 -- # read -r var val 00:06:12.538 11:58:19 -- accel/accel.sh@21 -- # val= 00:06:12.538 11:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.538 11:58:19 -- accel/accel.sh@20 -- # IFS=: 00:06:12.538 11:58:19 -- accel/accel.sh@20 -- # read -r var val 00:06:12.538 11:58:19 -- accel/accel.sh@21 -- # val= 00:06:12.538 11:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.538 11:58:19 -- accel/accel.sh@20 -- # IFS=: 00:06:12.538 11:58:19 -- accel/accel.sh@20 -- # read -r var val 00:06:12.538 11:58:19 -- accel/accel.sh@21 -- # val= 00:06:12.538 11:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.538 11:58:19 -- accel/accel.sh@20 -- # IFS=: 00:06:12.538 11:58:19 -- accel/accel.sh@20 -- # read -r var val 00:06:12.538 11:58:19 -- accel/accel.sh@21 -- # val= 00:06:12.538 11:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.538 11:58:19 -- accel/accel.sh@20 -- # IFS=: 00:06:12.538 11:58:19 -- accel/accel.sh@20 -- # read -r var val 00:06:12.538 11:58:19 -- accel/accel.sh@21 -- # val= 00:06:12.538 11:58:19 -- accel/accel.sh@22 -- # case "$var" in 00:06:12.538 11:58:19 -- accel/accel.sh@20 -- # IFS=: 
00:06:12.538 11:58:19 -- accel/accel.sh@20 -- # read -r var val 00:06:12.538 11:58:19 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:12.538 11:58:19 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:12.538 11:58:19 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.538 00:06:12.538 real 0m2.866s 00:06:12.538 user 0m2.540s 00:06:12.538 sys 0m0.315s 00:06:12.538 11:58:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.538 11:58:19 -- common/autotest_common.sh@10 -- # set +x 00:06:12.538 ************************************ 00:06:12.538 END TEST accel_crc32c_C2 00:06:12.538 ************************************ 00:06:12.538 11:58:19 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:12.538 11:58:19 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:12.538 11:58:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:12.538 11:58:19 -- common/autotest_common.sh@10 -- # set +x 00:06:12.538 ************************************ 00:06:12.538 START TEST accel_copy 00:06:12.538 ************************************ 00:06:12.538 11:58:19 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:06:12.538 11:58:19 -- accel/accel.sh@16 -- # local accel_opc 00:06:12.538 11:58:19 -- accel/accel.sh@17 -- # local accel_module 00:06:12.538 11:58:19 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:12.538 11:58:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:12.538 11:58:19 -- accel/accel.sh@12 -- # build_accel_config 00:06:12.538 11:58:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:12.538 11:58:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.538 11:58:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.538 11:58:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:12.538 11:58:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:12.538 11:58:19 -- accel/accel.sh@41 -- # local IFS=, 00:06:12.538 
11:58:19 -- accel/accel.sh@42 -- # jq -r . 00:06:12.538 [2024-07-25 11:58:19.497688] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:12.539 [2024-07-25 11:58:19.497747] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1186810 ] 00:06:12.539 [2024-07-25 11:58:19.583172] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.539 [2024-07-25 11:58:19.667074] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.995 11:58:20 -- accel/accel.sh@18 -- # out=' 00:06:13.995 SPDK Configuration: 00:06:13.995 Core mask: 0x1 00:06:13.995 00:06:13.995 Accel Perf Configuration: 00:06:13.995 Workload Type: copy 00:06:13.995 Transfer size: 4096 bytes 00:06:13.995 Vector count 1 00:06:13.995 Module: software 00:06:13.995 Queue depth: 32 00:06:13.995 Allocate depth: 32 00:06:13.995 # threads/core: 1 00:06:13.995 Run time: 1 seconds 00:06:13.995 Verify: Yes 00:06:13.995 00:06:13.995 Running for 1 seconds... 
00:06:13.995 00:06:13.995 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:13.995 ------------------------------------------------------------------------------------ 00:06:13.995 0,0 433824/s 1694 MiB/s 0 0 00:06:13.995 ==================================================================================== 00:06:13.995 Total 433824/s 1694 MiB/s 0 0' 00:06:13.995 11:58:20 -- accel/accel.sh@20 -- # IFS=: 00:06:13.995 11:58:20 -- accel/accel.sh@20 -- # read -r var val 00:06:13.995 11:58:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:13.995 11:58:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:13.995 11:58:20 -- accel/accel.sh@12 -- # build_accel_config 00:06:13.995 11:58:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:13.995 11:58:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:13.995 11:58:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:13.995 11:58:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:13.995 11:58:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:13.995 11:58:20 -- accel/accel.sh@41 -- # local IFS=, 00:06:13.995 11:58:20 -- accel/accel.sh@42 -- # jq -r . 00:06:13.995 [2024-07-25 11:58:20.941248] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:13.995 [2024-07-25 11:58:20.941314] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1186991 ] 00:06:13.995 [2024-07-25 11:58:21.028119] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.995 [2024-07-25 11:58:21.111908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.995 11:58:21 -- accel/accel.sh@21 -- # val= 00:06:13.995 11:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.995 11:58:21 -- accel/accel.sh@20 -- # IFS=: 00:06:13.995 11:58:21 -- accel/accel.sh@20 -- # read -r var val 00:06:13.995 11:58:21 -- accel/accel.sh@21 -- # val= 00:06:13.995 11:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.995 11:58:21 -- accel/accel.sh@20 -- # IFS=: 00:06:13.995 11:58:21 -- accel/accel.sh@20 -- # read -r var val 00:06:13.995 11:58:21 -- accel/accel.sh@21 -- # val=0x1 00:06:13.995 11:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.995 11:58:21 -- accel/accel.sh@20 -- # IFS=: 00:06:13.995 11:58:21 -- accel/accel.sh@20 -- # read -r var val 00:06:13.995 11:58:21 -- accel/accel.sh@21 -- # val= 00:06:13.995 11:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.995 11:58:21 -- accel/accel.sh@20 -- # IFS=: 00:06:13.995 11:58:21 -- accel/accel.sh@20 -- # read -r var val 00:06:13.995 11:58:21 -- accel/accel.sh@21 -- # val= 00:06:13.995 11:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.995 11:58:21 -- accel/accel.sh@20 -- # IFS=: 00:06:13.995 11:58:21 -- accel/accel.sh@20 -- # read -r var val 00:06:13.995 11:58:21 -- accel/accel.sh@21 -- # val=copy 00:06:13.995 11:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.995 11:58:21 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:13.995 11:58:21 -- accel/accel.sh@20 -- # IFS=: 00:06:13.995 11:58:21 -- accel/accel.sh@20 -- # read -r var val 00:06:13.995 11:58:21 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:13.995 11:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.995 11:58:21 -- accel/accel.sh@20 -- # IFS=: 00:06:13.995 11:58:21 -- accel/accel.sh@20 -- # read -r var val 00:06:13.995 11:58:21 -- accel/accel.sh@21 -- # val= 00:06:13.995 11:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.995 11:58:21 -- accel/accel.sh@20 -- # IFS=: 00:06:13.995 11:58:21 -- accel/accel.sh@20 -- # read -r var val 00:06:13.995 11:58:21 -- accel/accel.sh@21 -- # val=software 00:06:13.995 11:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.995 11:58:21 -- accel/accel.sh@23 -- # accel_module=software 00:06:13.995 11:58:21 -- accel/accel.sh@20 -- # IFS=: 00:06:13.995 11:58:21 -- accel/accel.sh@20 -- # read -r var val 00:06:13.995 11:58:21 -- accel/accel.sh@21 -- # val=32 00:06:13.995 11:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.995 11:58:21 -- accel/accel.sh@20 -- # IFS=: 00:06:13.995 11:58:21 -- accel/accel.sh@20 -- # read -r var val 00:06:13.995 11:58:21 -- accel/accel.sh@21 -- # val=32 00:06:13.995 11:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.995 11:58:21 -- accel/accel.sh@20 -- # IFS=: 00:06:13.995 11:58:21 -- accel/accel.sh@20 -- # read -r var val 00:06:13.995 11:58:21 -- accel/accel.sh@21 -- # val=1 00:06:13.995 11:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.995 11:58:21 -- accel/accel.sh@20 -- # IFS=: 00:06:13.995 11:58:21 -- accel/accel.sh@20 -- # read -r var val 00:06:13.995 11:58:21 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:13.995 11:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.995 11:58:21 -- accel/accel.sh@20 -- # IFS=: 00:06:13.995 11:58:21 -- accel/accel.sh@20 -- # read -r var val 00:06:13.995 11:58:21 -- accel/accel.sh@21 -- # val=Yes 00:06:13.995 11:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.995 11:58:21 -- accel/accel.sh@20 -- # IFS=: 00:06:13.995 11:58:21 -- accel/accel.sh@20 -- # read -r var val 00:06:13.995 11:58:21 -- accel/accel.sh@21 
-- # val= 00:06:13.995 11:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.995 11:58:21 -- accel/accel.sh@20 -- # IFS=: 00:06:13.995 11:58:21 -- accel/accel.sh@20 -- # read -r var val 00:06:13.995 11:58:21 -- accel/accel.sh@21 -- # val= 00:06:13.995 11:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:06:13.995 11:58:21 -- accel/accel.sh@20 -- # IFS=: 00:06:13.995 11:58:21 -- accel/accel.sh@20 -- # read -r var val 00:06:15.374 11:58:22 -- accel/accel.sh@21 -- # val= 00:06:15.374 11:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.374 11:58:22 -- accel/accel.sh@20 -- # IFS=: 00:06:15.374 11:58:22 -- accel/accel.sh@20 -- # read -r var val 00:06:15.374 11:58:22 -- accel/accel.sh@21 -- # val= 00:06:15.374 11:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.374 11:58:22 -- accel/accel.sh@20 -- # IFS=: 00:06:15.374 11:58:22 -- accel/accel.sh@20 -- # read -r var val 00:06:15.374 11:58:22 -- accel/accel.sh@21 -- # val= 00:06:15.374 11:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.374 11:58:22 -- accel/accel.sh@20 -- # IFS=: 00:06:15.374 11:58:22 -- accel/accel.sh@20 -- # read -r var val 00:06:15.374 11:58:22 -- accel/accel.sh@21 -- # val= 00:06:15.374 11:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.374 11:58:22 -- accel/accel.sh@20 -- # IFS=: 00:06:15.374 11:58:22 -- accel/accel.sh@20 -- # read -r var val 00:06:15.374 11:58:22 -- accel/accel.sh@21 -- # val= 00:06:15.374 11:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.374 11:58:22 -- accel/accel.sh@20 -- # IFS=: 00:06:15.374 11:58:22 -- accel/accel.sh@20 -- # read -r var val 00:06:15.374 11:58:22 -- accel/accel.sh@21 -- # val= 00:06:15.374 11:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:06:15.374 11:58:22 -- accel/accel.sh@20 -- # IFS=: 00:06:15.374 11:58:22 -- accel/accel.sh@20 -- # read -r var val 00:06:15.374 11:58:22 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:15.374 11:58:22 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:15.374 11:58:22 -- accel/accel.sh@28 
-- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.374 00:06:15.374 real 0m2.897s 00:06:15.374 user 0m2.559s 00:06:15.374 sys 0m0.319s 00:06:15.374 11:58:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.374 11:58:22 -- common/autotest_common.sh@10 -- # set +x 00:06:15.374 ************************************ 00:06:15.374 END TEST accel_copy 00:06:15.374 ************************************ 00:06:15.374 11:58:22 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:15.374 11:58:22 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:15.374 11:58:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:15.374 11:58:22 -- common/autotest_common.sh@10 -- # set +x 00:06:15.374 ************************************ 00:06:15.374 START TEST accel_fill 00:06:15.374 ************************************ 00:06:15.374 11:58:22 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:15.374 11:58:22 -- accel/accel.sh@16 -- # local accel_opc 00:06:15.374 11:58:22 -- accel/accel.sh@17 -- # local accel_module 00:06:15.374 11:58:22 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:15.374 11:58:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:15.374 11:58:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:15.374 11:58:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:15.374 11:58:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.374 11:58:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.374 11:58:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:15.374 11:58:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:15.374 11:58:22 -- accel/accel.sh@41 -- # local IFS=, 00:06:15.374 11:58:22 -- accel/accel.sh@42 -- # jq -r . 00:06:15.374 [2024-07-25 11:58:22.434825] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:15.374 [2024-07-25 11:58:22.434887] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1187228 ] 00:06:15.374 [2024-07-25 11:58:22.520756] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.374 [2024-07-25 11:58:22.610235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.762 11:58:23 -- accel/accel.sh@18 -- # out=' 00:06:16.762 SPDK Configuration: 00:06:16.762 Core mask: 0x1 00:06:16.762 00:06:16.762 Accel Perf Configuration: 00:06:16.762 Workload Type: fill 00:06:16.762 Fill pattern: 0x80 00:06:16.762 Transfer size: 4096 bytes 00:06:16.762 Vector count 1 00:06:16.762 Module: software 00:06:16.762 Queue depth: 64 00:06:16.762 Allocate depth: 64 00:06:16.762 # threads/core: 1 00:06:16.762 Run time: 1 seconds 00:06:16.762 Verify: Yes 00:06:16.762 00:06:16.762 Running for 1 seconds... 
00:06:16.762 00:06:16.762 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:16.762 ------------------------------------------------------------------------------------ 00:06:16.762 0,0 680064/s 2656 MiB/s 0 0 00:06:16.762 ==================================================================================== 00:06:16.762 Total 680064/s 2656 MiB/s 0 0' 00:06:16.762 11:58:23 -- accel/accel.sh@20 -- # IFS=: 00:06:16.762 11:58:23 -- accel/accel.sh@20 -- # read -r var val 00:06:16.762 11:58:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:16.762 11:58:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:16.762 11:58:23 -- accel/accel.sh@12 -- # build_accel_config 00:06:16.762 11:58:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:16.762 11:58:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:16.762 11:58:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:16.762 11:58:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:16.762 11:58:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:16.762 11:58:23 -- accel/accel.sh@41 -- # local IFS=, 00:06:16.762 11:58:23 -- accel/accel.sh@42 -- # jq -r . 00:06:16.762 [2024-07-25 11:58:23.883737] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:16.762 [2024-07-25 11:58:23.883798] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1187456 ] 00:06:16.762 [2024-07-25 11:58:23.972149] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.762 [2024-07-25 11:58:24.061762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.022 11:58:24 -- accel/accel.sh@21 -- # val= 00:06:17.022 11:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.022 11:58:24 -- accel/accel.sh@20 -- # IFS=: 00:06:17.022 11:58:24 -- accel/accel.sh@20 -- # read -r var val 00:06:17.022 11:58:24 -- accel/accel.sh@21 -- # val= 00:06:17.022 11:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.022 11:58:24 -- accel/accel.sh@20 -- # IFS=: 00:06:17.022 11:58:24 -- accel/accel.sh@20 -- # read -r var val 00:06:17.022 11:58:24 -- accel/accel.sh@21 -- # val=0x1 00:06:17.022 11:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.022 11:58:24 -- accel/accel.sh@20 -- # IFS=: 00:06:17.022 11:58:24 -- accel/accel.sh@20 -- # read -r var val 00:06:17.022 11:58:24 -- accel/accel.sh@21 -- # val= 00:06:17.022 11:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.022 11:58:24 -- accel/accel.sh@20 -- # IFS=: 00:06:17.022 11:58:24 -- accel/accel.sh@20 -- # read -r var val 00:06:17.022 11:58:24 -- accel/accel.sh@21 -- # val= 00:06:17.022 11:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.022 11:58:24 -- accel/accel.sh@20 -- # IFS=: 00:06:17.022 11:58:24 -- accel/accel.sh@20 -- # read -r var val 00:06:17.022 11:58:24 -- accel/accel.sh@21 -- # val=fill 00:06:17.022 11:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.022 11:58:24 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:17.022 11:58:24 -- accel/accel.sh@20 -- # IFS=: 00:06:17.022 11:58:24 -- accel/accel.sh@20 -- # read -r var val 00:06:17.022 11:58:24 -- 
accel/accel.sh@21 -- # val=0x80 00:06:17.022 11:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.022 11:58:24 -- accel/accel.sh@20 -- # IFS=: 00:06:17.022 11:58:24 -- accel/accel.sh@20 -- # read -r var val 00:06:17.022 11:58:24 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:17.022 11:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.022 11:58:24 -- accel/accel.sh@20 -- # IFS=: 00:06:17.022 11:58:24 -- accel/accel.sh@20 -- # read -r var val 00:06:17.022 11:58:24 -- accel/accel.sh@21 -- # val= 00:06:17.022 11:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.022 11:58:24 -- accel/accel.sh@20 -- # IFS=: 00:06:17.022 11:58:24 -- accel/accel.sh@20 -- # read -r var val 00:06:17.022 11:58:24 -- accel/accel.sh@21 -- # val=software 00:06:17.022 11:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.022 11:58:24 -- accel/accel.sh@23 -- # accel_module=software 00:06:17.022 11:58:24 -- accel/accel.sh@20 -- # IFS=: 00:06:17.022 11:58:24 -- accel/accel.sh@20 -- # read -r var val 00:06:17.022 11:58:24 -- accel/accel.sh@21 -- # val=64 00:06:17.022 11:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.022 11:58:24 -- accel/accel.sh@20 -- # IFS=: 00:06:17.022 11:58:24 -- accel/accel.sh@20 -- # read -r var val 00:06:17.022 11:58:24 -- accel/accel.sh@21 -- # val=64 00:06:17.022 11:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.022 11:58:24 -- accel/accel.sh@20 -- # IFS=: 00:06:17.022 11:58:24 -- accel/accel.sh@20 -- # read -r var val 00:06:17.022 11:58:24 -- accel/accel.sh@21 -- # val=1 00:06:17.022 11:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.022 11:58:24 -- accel/accel.sh@20 -- # IFS=: 00:06:17.022 11:58:24 -- accel/accel.sh@20 -- # read -r var val 00:06:17.022 11:58:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:17.022 11:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.022 11:58:24 -- accel/accel.sh@20 -- # IFS=: 00:06:17.022 11:58:24 -- accel/accel.sh@20 -- # read -r var val 00:06:17.022 11:58:24 -- accel/accel.sh@21 
-- # val=Yes 00:06:17.022 11:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.022 11:58:24 -- accel/accel.sh@20 -- # IFS=: 00:06:17.022 11:58:24 -- accel/accel.sh@20 -- # read -r var val 00:06:17.022 11:58:24 -- accel/accel.sh@21 -- # val= 00:06:17.022 11:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.022 11:58:24 -- accel/accel.sh@20 -- # IFS=: 00:06:17.022 11:58:24 -- accel/accel.sh@20 -- # read -r var val 00:06:17.022 11:58:24 -- accel/accel.sh@21 -- # val= 00:06:17.022 11:58:24 -- accel/accel.sh@22 -- # case "$var" in 00:06:17.022 11:58:24 -- accel/accel.sh@20 -- # IFS=: 00:06:17.022 11:58:24 -- accel/accel.sh@20 -- # read -r var val 00:06:18.402 11:58:25 -- accel/accel.sh@21 -- # val= 00:06:18.402 11:58:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.402 11:58:25 -- accel/accel.sh@20 -- # IFS=: 00:06:18.402 11:58:25 -- accel/accel.sh@20 -- # read -r var val 00:06:18.402 11:58:25 -- accel/accel.sh@21 -- # val= 00:06:18.402 11:58:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.402 11:58:25 -- accel/accel.sh@20 -- # IFS=: 00:06:18.402 11:58:25 -- accel/accel.sh@20 -- # read -r var val 00:06:18.402 11:58:25 -- accel/accel.sh@21 -- # val= 00:06:18.402 11:58:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.402 11:58:25 -- accel/accel.sh@20 -- # IFS=: 00:06:18.402 11:58:25 -- accel/accel.sh@20 -- # read -r var val 00:06:18.402 11:58:25 -- accel/accel.sh@21 -- # val= 00:06:18.402 11:58:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.402 11:58:25 -- accel/accel.sh@20 -- # IFS=: 00:06:18.402 11:58:25 -- accel/accel.sh@20 -- # read -r var val 00:06:18.402 11:58:25 -- accel/accel.sh@21 -- # val= 00:06:18.402 11:58:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.402 11:58:25 -- accel/accel.sh@20 -- # IFS=: 00:06:18.402 11:58:25 -- accel/accel.sh@20 -- # read -r var val 00:06:18.402 11:58:25 -- accel/accel.sh@21 -- # val= 00:06:18.402 11:58:25 -- accel/accel.sh@22 -- # case "$var" in 00:06:18.402 11:58:25 -- accel/accel.sh@20 -- # 
IFS=: 00:06:18.402 11:58:25 -- accel/accel.sh@20 -- # read -r var val 00:06:18.402 11:58:25 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:18.402 11:58:25 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:18.402 11:58:25 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.402 00:06:18.402 real 0m2.905s 00:06:18.402 user 0m2.559s 00:06:18.402 sys 0m0.322s 00:06:18.402 11:58:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.402 11:58:25 -- common/autotest_common.sh@10 -- # set +x 00:06:18.402 ************************************ 00:06:18.402 END TEST accel_fill 00:06:18.402 ************************************ 00:06:18.402 11:58:25 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:18.402 11:58:25 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:18.402 11:58:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:18.402 11:58:25 -- common/autotest_common.sh@10 -- # set +x 00:06:18.402 ************************************ 00:06:18.402 START TEST accel_copy_crc32c 00:06:18.402 ************************************ 00:06:18.402 11:58:25 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:06:18.402 11:58:25 -- accel/accel.sh@16 -- # local accel_opc 00:06:18.402 11:58:25 -- accel/accel.sh@17 -- # local accel_module 00:06:18.402 11:58:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:18.402 11:58:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:18.402 11:58:25 -- accel/accel.sh@12 -- # build_accel_config 00:06:18.402 11:58:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:18.402 11:58:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.402 11:58:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.402 11:58:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:18.402 11:58:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:18.402 11:58:25 -- 
accel/accel.sh@41 -- # local IFS=, 00:06:18.402 11:58:25 -- accel/accel.sh@42 -- # jq -r . 00:06:18.402 [2024-07-25 11:58:25.385840] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:18.402 [2024-07-25 11:58:25.385904] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1187728 ] 00:06:18.402 [2024-07-25 11:58:25.473160] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.402 [2024-07-25 11:58:25.556873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.781 11:58:26 -- accel/accel.sh@18 -- # out=' 00:06:19.781 SPDK Configuration: 00:06:19.781 Core mask: 0x1 00:06:19.781 00:06:19.781 Accel Perf Configuration: 00:06:19.781 Workload Type: copy_crc32c 00:06:19.781 CRC-32C seed: 0 00:06:19.781 Vector size: 4096 bytes 00:06:19.781 Transfer size: 4096 bytes 00:06:19.781 Vector count 1 00:06:19.781 Module: software 00:06:19.781 Queue depth: 32 00:06:19.781 Allocate depth: 32 00:06:19.781 # threads/core: 1 00:06:19.781 Run time: 1 seconds 00:06:19.781 Verify: Yes 00:06:19.781 00:06:19.781 Running for 1 seconds... 
00:06:19.781 00:06:19.781 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:19.781 ------------------------------------------------------------------------------------ 00:06:19.781 0,0 325280/s 1270 MiB/s 0 0 00:06:19.781 ==================================================================================== 00:06:19.781 Total 325280/s 1270 MiB/s 0 0' 00:06:19.781 11:58:26 -- accel/accel.sh@20 -- # IFS=: 00:06:19.781 11:58:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:19.781 11:58:26 -- accel/accel.sh@20 -- # read -r var val 00:06:19.781 11:58:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:19.781 11:58:26 -- accel/accel.sh@12 -- # build_accel_config 00:06:19.781 11:58:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:19.781 11:58:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.781 11:58:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.781 11:58:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:19.781 11:58:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:19.781 11:58:26 -- accel/accel.sh@41 -- # local IFS=, 00:06:19.781 11:58:26 -- accel/accel.sh@42 -- # jq -r . 00:06:19.781 [2024-07-25 11:58:26.829869] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:19.781 [2024-07-25 11:58:26.829930] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1187919 ] 00:06:19.781 [2024-07-25 11:58:26.915436] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.781 [2024-07-25 11:58:26.997630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.781 11:58:27 -- accel/accel.sh@21 -- # val= 00:06:19.781 11:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # IFS=: 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # read -r var val 00:06:19.781 11:58:27 -- accel/accel.sh@21 -- # val= 00:06:19.781 11:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # IFS=: 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # read -r var val 00:06:19.781 11:58:27 -- accel/accel.sh@21 -- # val=0x1 00:06:19.781 11:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # IFS=: 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # read -r var val 00:06:19.781 11:58:27 -- accel/accel.sh@21 -- # val= 00:06:19.781 11:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # IFS=: 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # read -r var val 00:06:19.781 11:58:27 -- accel/accel.sh@21 -- # val= 00:06:19.781 11:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # IFS=: 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # read -r var val 00:06:19.781 11:58:27 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:19.781 11:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.781 11:58:27 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # IFS=: 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # read -r var val 00:06:19.781 11:58:27 -- 
accel/accel.sh@21 -- # val=0 00:06:19.781 11:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # IFS=: 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # read -r var val 00:06:19.781 11:58:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:19.781 11:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # IFS=: 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # read -r var val 00:06:19.781 11:58:27 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:19.781 11:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # IFS=: 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # read -r var val 00:06:19.781 11:58:27 -- accel/accel.sh@21 -- # val= 00:06:19.781 11:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # IFS=: 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # read -r var val 00:06:19.781 11:58:27 -- accel/accel.sh@21 -- # val=software 00:06:19.781 11:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.781 11:58:27 -- accel/accel.sh@23 -- # accel_module=software 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # IFS=: 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # read -r var val 00:06:19.781 11:58:27 -- accel/accel.sh@21 -- # val=32 00:06:19.781 11:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # IFS=: 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # read -r var val 00:06:19.781 11:58:27 -- accel/accel.sh@21 -- # val=32 00:06:19.781 11:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # IFS=: 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # read -r var val 00:06:19.781 11:58:27 -- accel/accel.sh@21 -- # val=1 00:06:19.781 11:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # IFS=: 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # read -r var val 00:06:19.781 11:58:27 -- accel/accel.sh@21 
-- # val='1 seconds' 00:06:19.781 11:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # IFS=: 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # read -r var val 00:06:19.781 11:58:27 -- accel/accel.sh@21 -- # val=Yes 00:06:19.781 11:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # IFS=: 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # read -r var val 00:06:19.781 11:58:27 -- accel/accel.sh@21 -- # val= 00:06:19.781 11:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # IFS=: 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # read -r var val 00:06:19.781 11:58:27 -- accel/accel.sh@21 -- # val= 00:06:19.781 11:58:27 -- accel/accel.sh@22 -- # case "$var" in 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # IFS=: 00:06:19.781 11:58:27 -- accel/accel.sh@20 -- # read -r var val 00:06:21.160 11:58:28 -- accel/accel.sh@21 -- # val= 00:06:21.160 11:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.160 11:58:28 -- accel/accel.sh@20 -- # IFS=: 00:06:21.160 11:58:28 -- accel/accel.sh@20 -- # read -r var val 00:06:21.160 11:58:28 -- accel/accel.sh@21 -- # val= 00:06:21.160 11:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.160 11:58:28 -- accel/accel.sh@20 -- # IFS=: 00:06:21.160 11:58:28 -- accel/accel.sh@20 -- # read -r var val 00:06:21.160 11:58:28 -- accel/accel.sh@21 -- # val= 00:06:21.160 11:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.160 11:58:28 -- accel/accel.sh@20 -- # IFS=: 00:06:21.160 11:58:28 -- accel/accel.sh@20 -- # read -r var val 00:06:21.160 11:58:28 -- accel/accel.sh@21 -- # val= 00:06:21.160 11:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.160 11:58:28 -- accel/accel.sh@20 -- # IFS=: 00:06:21.160 11:58:28 -- accel/accel.sh@20 -- # read -r var val 00:06:21.160 11:58:28 -- accel/accel.sh@21 -- # val= 00:06:21.160 11:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.160 11:58:28 -- 
accel/accel.sh@20 -- # IFS=: 00:06:21.160 11:58:28 -- accel/accel.sh@20 -- # read -r var val 00:06:21.160 11:58:28 -- accel/accel.sh@21 -- # val= 00:06:21.160 11:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:06:21.160 11:58:28 -- accel/accel.sh@20 -- # IFS=: 00:06:21.160 11:58:28 -- accel/accel.sh@20 -- # read -r var val 00:06:21.160 11:58:28 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:21.160 11:58:28 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:21.160 11:58:28 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.160 00:06:21.160 real 0m2.882s 00:06:21.160 user 0m2.557s 00:06:21.160 sys 0m0.306s 00:06:21.160 11:58:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.160 11:58:28 -- common/autotest_common.sh@10 -- # set +x 00:06:21.160 ************************************ 00:06:21.160 END TEST accel_copy_crc32c 00:06:21.160 ************************************ 00:06:21.160 11:58:28 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:21.160 11:58:28 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:21.160 11:58:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:21.160 11:58:28 -- common/autotest_common.sh@10 -- # set +x 00:06:21.160 ************************************ 00:06:21.160 START TEST accel_copy_crc32c_C2 00:06:21.160 ************************************ 00:06:21.160 11:58:28 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:21.160 11:58:28 -- accel/accel.sh@16 -- # local accel_opc 00:06:21.160 11:58:28 -- accel/accel.sh@17 -- # local accel_module 00:06:21.160 11:58:28 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:21.160 11:58:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:21.160 11:58:28 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.160 11:58:28 -- accel/accel.sh@32 -- # 
accel_json_cfg=() 00:06:21.160 11:58:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.160 11:58:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.160 11:58:28 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:21.160 11:58:28 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:21.160 11:58:28 -- accel/accel.sh@41 -- # local IFS=, 00:06:21.160 11:58:28 -- accel/accel.sh@42 -- # jq -r . 00:06:21.160 [2024-07-25 11:58:28.304585] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:21.160 [2024-07-25 11:58:28.304646] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1188120 ] 00:06:21.160 [2024-07-25 11:58:28.390893] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.419 [2024-07-25 11:58:28.474695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.797 11:58:29 -- accel/accel.sh@18 -- # out=' 00:06:22.797 SPDK Configuration: 00:06:22.797 Core mask: 0x1 00:06:22.797 00:06:22.797 Accel Perf Configuration: 00:06:22.797 Workload Type: copy_crc32c 00:06:22.797 CRC-32C seed: 0 00:06:22.797 Vector size: 4096 bytes 00:06:22.797 Transfer size: 8192 bytes 00:06:22.797 Vector count 2 00:06:22.797 Module: software 00:06:22.797 Queue depth: 32 00:06:22.797 Allocate depth: 32 00:06:22.797 # threads/core: 1 00:06:22.797 Run time: 1 seconds 00:06:22.797 Verify: Yes 00:06:22.797 00:06:22.797 Running for 1 seconds... 
00:06:22.797 00:06:22.797 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:22.797 ------------------------------------------------------------------------------------ 00:06:22.797 0,0 238432/s 1862 MiB/s 0 0 00:06:22.797 ==================================================================================== 00:06:22.797 Total 238432/s 931 MiB/s 0 0' 00:06:22.797 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.797 11:58:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:22.797 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:06:22.797 11:58:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:22.797 11:58:29 -- accel/accel.sh@12 -- # build_accel_config 00:06:22.797 11:58:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:22.797 11:58:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.797 11:58:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.797 11:58:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:22.797 11:58:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:22.797 11:58:29 -- accel/accel.sh@41 -- # local IFS=, 00:06:22.797 11:58:29 -- accel/accel.sh@42 -- # jq -r . 00:06:22.797 [2024-07-25 11:58:29.730958] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:22.797 [2024-07-25 11:58:29.731007] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1188301 ] 00:06:22.797 [2024-07-25 11:58:29.815673] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.797 [2024-07-25 11:58:29.897950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.797 11:58:29 -- accel/accel.sh@21 -- # val= 00:06:22.797 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.797 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.797 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:06:22.797 11:58:29 -- accel/accel.sh@21 -- # val= 00:06:22.797 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.797 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.797 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:06:22.797 11:58:29 -- accel/accel.sh@21 -- # val=0x1 00:06:22.797 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.797 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.797 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:06:22.797 11:58:29 -- accel/accel.sh@21 -- # val= 00:06:22.797 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.797 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.797 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:06:22.797 11:58:29 -- accel/accel.sh@21 -- # val= 00:06:22.797 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.797 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.797 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:06:22.797 11:58:29 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:22.797 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.797 11:58:29 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:22.797 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.797 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:06:22.797 11:58:29 -- 
accel/accel.sh@21 -- # val=0 00:06:22.797 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.797 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.797 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:06:22.797 11:58:29 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:22.797 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.797 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.797 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:06:22.797 11:58:29 -- accel/accel.sh@21 -- # val='8192 bytes' 00:06:22.797 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.797 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.797 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:06:22.797 11:58:29 -- accel/accel.sh@21 -- # val= 00:06:22.797 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.797 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.797 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:06:22.797 11:58:29 -- accel/accel.sh@21 -- # val=software 00:06:22.798 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.798 11:58:29 -- accel/accel.sh@23 -- # accel_module=software 00:06:22.798 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.798 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:06:22.798 11:58:29 -- accel/accel.sh@21 -- # val=32 00:06:22.798 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.798 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.798 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:06:22.798 11:58:29 -- accel/accel.sh@21 -- # val=32 00:06:22.798 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.798 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.798 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:06:22.798 11:58:29 -- accel/accel.sh@21 -- # val=1 00:06:22.798 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.798 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.798 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:06:22.798 11:58:29 -- accel/accel.sh@21 
-- # val='1 seconds' 00:06:22.798 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.798 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.798 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:06:22.798 11:58:29 -- accel/accel.sh@21 -- # val=Yes 00:06:22.798 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.798 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.798 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:06:22.798 11:58:29 -- accel/accel.sh@21 -- # val= 00:06:22.798 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.798 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.798 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:06:22.798 11:58:29 -- accel/accel.sh@21 -- # val= 00:06:22.798 11:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:06:22.798 11:58:29 -- accel/accel.sh@20 -- # IFS=: 00:06:22.798 11:58:29 -- accel/accel.sh@20 -- # read -r var val 00:06:24.175 11:58:31 -- accel/accel.sh@21 -- # val= 00:06:24.175 11:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.175 11:58:31 -- accel/accel.sh@20 -- # IFS=: 00:06:24.175 11:58:31 -- accel/accel.sh@20 -- # read -r var val 00:06:24.175 11:58:31 -- accel/accel.sh@21 -- # val= 00:06:24.175 11:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.175 11:58:31 -- accel/accel.sh@20 -- # IFS=: 00:06:24.175 11:58:31 -- accel/accel.sh@20 -- # read -r var val 00:06:24.175 11:58:31 -- accel/accel.sh@21 -- # val= 00:06:24.175 11:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.175 11:58:31 -- accel/accel.sh@20 -- # IFS=: 00:06:24.175 11:58:31 -- accel/accel.sh@20 -- # read -r var val 00:06:24.175 11:58:31 -- accel/accel.sh@21 -- # val= 00:06:24.175 11:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.175 11:58:31 -- accel/accel.sh@20 -- # IFS=: 00:06:24.175 11:58:31 -- accel/accel.sh@20 -- # read -r var val 00:06:24.175 11:58:31 -- accel/accel.sh@21 -- # val= 00:06:24.175 11:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.175 11:58:31 -- 
accel/accel.sh@20 -- # IFS=: 00:06:24.175 11:58:31 -- accel/accel.sh@20 -- # read -r var val 00:06:24.175 11:58:31 -- accel/accel.sh@21 -- # val= 00:06:24.175 11:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:06:24.175 11:58:31 -- accel/accel.sh@20 -- # IFS=: 00:06:24.175 11:58:31 -- accel/accel.sh@20 -- # read -r var val 00:06:24.175 11:58:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:24.175 11:58:31 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:24.175 11:58:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.175 00:06:24.175 real 0m2.864s 00:06:24.175 user 0m2.533s 00:06:24.175 sys 0m0.309s 00:06:24.175 11:58:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.175 11:58:31 -- common/autotest_common.sh@10 -- # set +x 00:06:24.175 ************************************ 00:06:24.175 END TEST accel_copy_crc32c_C2 00:06:24.175 ************************************ 00:06:24.175 11:58:31 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:24.175 11:58:31 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:24.175 11:58:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:24.175 11:58:31 -- common/autotest_common.sh@10 -- # set +x 00:06:24.175 ************************************ 00:06:24.175 START TEST accel_dualcast 00:06:24.175 ************************************ 00:06:24.175 11:58:31 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:06:24.175 11:58:31 -- accel/accel.sh@16 -- # local accel_opc 00:06:24.175 11:58:31 -- accel/accel.sh@17 -- # local accel_module 00:06:24.175 11:58:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:06:24.175 11:58:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:24.175 11:58:31 -- accel/accel.sh@12 -- # build_accel_config 00:06:24.175 11:58:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:24.175 11:58:31 -- 
accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.175 11:58:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.175 11:58:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:24.175 11:58:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:24.175 11:58:31 -- accel/accel.sh@41 -- # local IFS=, 00:06:24.175 11:58:31 -- accel/accel.sh@42 -- # jq -r . 00:06:24.175 [2024-07-25 11:58:31.207814] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:24.175 [2024-07-25 11:58:31.207876] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1188494 ] 00:06:24.175 [2024-07-25 11:58:31.293728] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.175 [2024-07-25 11:58:31.375236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.553 11:58:32 -- accel/accel.sh@18 -- # out=' 00:06:25.553 SPDK Configuration: 00:06:25.553 Core mask: 0x1 00:06:25.553 00:06:25.553 Accel Perf Configuration: 00:06:25.553 Workload Type: dualcast 00:06:25.553 Transfer size: 4096 bytes 00:06:25.553 Vector count 1 00:06:25.553 Module: software 00:06:25.553 Queue depth: 32 00:06:25.553 Allocate depth: 32 00:06:25.553 # threads/core: 1 00:06:25.553 Run time: 1 seconds 00:06:25.553 Verify: Yes 00:06:25.553 00:06:25.553 Running for 1 seconds... 
00:06:25.553 00:06:25.553 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:25.553 ------------------------------------------------------------------------------------ 00:06:25.553 0,0 511232/s 1997 MiB/s 0 0 00:06:25.554 ==================================================================================== 00:06:25.554 Total 511232/s 1997 MiB/s 0 0' 00:06:25.554 11:58:32 -- accel/accel.sh@20 -- # IFS=: 00:06:25.554 11:58:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:25.554 11:58:32 -- accel/accel.sh@20 -- # read -r var val 00:06:25.554 11:58:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:25.554 11:58:32 -- accel/accel.sh@12 -- # build_accel_config 00:06:25.554 11:58:32 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:25.554 11:58:32 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.554 11:58:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.554 11:58:32 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:25.554 11:58:32 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:25.554 11:58:32 -- accel/accel.sh@41 -- # local IFS=, 00:06:25.554 11:58:32 -- accel/accel.sh@42 -- # jq -r . 00:06:25.554 [2024-07-25 11:58:32.615331] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:25.554 [2024-07-25 11:58:32.615382] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1188684 ] 00:06:25.554 [2024-07-25 11:58:32.701053] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.554 [2024-07-25 11:58:32.781343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.554 11:58:32 -- accel/accel.sh@21 -- # val= 00:06:25.554 11:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.554 11:58:32 -- accel/accel.sh@20 -- # IFS=: 00:06:25.554 11:58:32 -- accel/accel.sh@20 -- # read -r var val 00:06:25.554 11:58:32 -- accel/accel.sh@21 -- # val= 00:06:25.554 11:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.554 11:58:32 -- accel/accel.sh@20 -- # IFS=: 00:06:25.554 11:58:32 -- accel/accel.sh@20 -- # read -r var val 00:06:25.554 11:58:32 -- accel/accel.sh@21 -- # val=0x1 00:06:25.554 11:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.554 11:58:32 -- accel/accel.sh@20 -- # IFS=: 00:06:25.554 11:58:32 -- accel/accel.sh@20 -- # read -r var val 00:06:25.554 11:58:32 -- accel/accel.sh@21 -- # val= 00:06:25.554 11:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.554 11:58:32 -- accel/accel.sh@20 -- # IFS=: 00:06:25.554 11:58:32 -- accel/accel.sh@20 -- # read -r var val 00:06:25.554 11:58:32 -- accel/accel.sh@21 -- # val= 00:06:25.554 11:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.554 11:58:32 -- accel/accel.sh@20 -- # IFS=: 00:06:25.554 11:58:32 -- accel/accel.sh@20 -- # read -r var val 00:06:25.554 11:58:32 -- accel/accel.sh@21 -- # val=dualcast 00:06:25.554 11:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.554 11:58:32 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:06:25.554 11:58:32 -- accel/accel.sh@20 -- # IFS=: 00:06:25.554 11:58:32 -- accel/accel.sh@20 -- # read -r var val 00:06:25.554 11:58:32 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:25.554 11:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.554 11:58:32 -- accel/accel.sh@20 -- # IFS=: 00:06:25.554 11:58:32 -- accel/accel.sh@20 -- # read -r var val 00:06:25.554 11:58:32 -- accel/accel.sh@21 -- # val= 00:06:25.554 11:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.554 11:58:32 -- accel/accel.sh@20 -- # IFS=: 00:06:25.554 11:58:32 -- accel/accel.sh@20 -- # read -r var val 00:06:25.554 11:58:32 -- accel/accel.sh@21 -- # val=software 00:06:25.554 11:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.554 11:58:32 -- accel/accel.sh@23 -- # accel_module=software 00:06:25.554 11:58:32 -- accel/accel.sh@20 -- # IFS=: 00:06:25.554 11:58:32 -- accel/accel.sh@20 -- # read -r var val 00:06:25.554 11:58:32 -- accel/accel.sh@21 -- # val=32 00:06:25.554 11:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.554 11:58:32 -- accel/accel.sh@20 -- # IFS=: 00:06:25.554 11:58:32 -- accel/accel.sh@20 -- # read -r var val 00:06:25.554 11:58:32 -- accel/accel.sh@21 -- # val=32 00:06:25.554 11:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.554 11:58:32 -- accel/accel.sh@20 -- # IFS=: 00:06:25.554 11:58:32 -- accel/accel.sh@20 -- # read -r var val 00:06:25.554 11:58:32 -- accel/accel.sh@21 -- # val=1 00:06:25.554 11:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.554 11:58:32 -- accel/accel.sh@20 -- # IFS=: 00:06:25.554 11:58:32 -- accel/accel.sh@20 -- # read -r var val 00:06:25.554 11:58:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:25.554 11:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.554 11:58:32 -- accel/accel.sh@20 -- # IFS=: 00:06:25.554 11:58:32 -- accel/accel.sh@20 -- # read -r var val 00:06:25.554 11:58:32 -- accel/accel.sh@21 -- # val=Yes 00:06:25.554 11:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.554 11:58:32 -- accel/accel.sh@20 -- # IFS=: 00:06:25.554 11:58:32 -- accel/accel.sh@20 -- # read -r var val 00:06:25.554 11:58:32 -- accel/accel.sh@21 
-- # val= 00:06:25.554 11:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.554 11:58:32 -- accel/accel.sh@20 -- # IFS=: 00:06:25.554 11:58:32 -- accel/accel.sh@20 -- # read -r var val 00:06:25.554 11:58:32 -- accel/accel.sh@21 -- # val= 00:06:25.554 11:58:32 -- accel/accel.sh@22 -- # case "$var" in 00:06:25.554 11:58:32 -- accel/accel.sh@20 -- # IFS=: 00:06:25.554 11:58:32 -- accel/accel.sh@20 -- # read -r var val 00:06:26.933 11:58:34 -- accel/accel.sh@21 -- # val= 00:06:26.933 11:58:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.933 11:58:34 -- accel/accel.sh@20 -- # IFS=: 00:06:26.933 11:58:34 -- accel/accel.sh@20 -- # read -r var val 00:06:26.933 11:58:34 -- accel/accel.sh@21 -- # val= 00:06:26.933 11:58:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.933 11:58:34 -- accel/accel.sh@20 -- # IFS=: 00:06:26.933 11:58:34 -- accel/accel.sh@20 -- # read -r var val 00:06:26.933 11:58:34 -- accel/accel.sh@21 -- # val= 00:06:26.933 11:58:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.933 11:58:34 -- accel/accel.sh@20 -- # IFS=: 00:06:26.933 11:58:34 -- accel/accel.sh@20 -- # read -r var val 00:06:26.933 11:58:34 -- accel/accel.sh@21 -- # val= 00:06:26.933 11:58:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.933 11:58:34 -- accel/accel.sh@20 -- # IFS=: 00:06:26.933 11:58:34 -- accel/accel.sh@20 -- # read -r var val 00:06:26.933 11:58:34 -- accel/accel.sh@21 -- # val= 00:06:26.933 11:58:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.933 11:58:34 -- accel/accel.sh@20 -- # IFS=: 00:06:26.933 11:58:34 -- accel/accel.sh@20 -- # read -r var val 00:06:26.933 11:58:34 -- accel/accel.sh@21 -- # val= 00:06:26.933 11:58:34 -- accel/accel.sh@22 -- # case "$var" in 00:06:26.933 11:58:34 -- accel/accel.sh@20 -- # IFS=: 00:06:26.933 11:58:34 -- accel/accel.sh@20 -- # read -r var val 00:06:26.933 11:58:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:26.933 11:58:34 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:06:26.933 11:58:34 -- 
accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.933 00:06:26.933 real 0m2.834s 00:06:26.933 user 0m2.532s 00:06:26.933 sys 0m0.283s 00:06:26.933 11:58:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.933 11:58:34 -- common/autotest_common.sh@10 -- # set +x 00:06:26.933 ************************************ 00:06:26.933 END TEST accel_dualcast 00:06:26.933 ************************************ 00:06:26.933 11:58:34 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:26.933 11:58:34 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:26.933 11:58:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:26.933 11:58:34 -- common/autotest_common.sh@10 -- # set +x 00:06:26.933 ************************************ 00:06:26.933 START TEST accel_compare 00:06:26.933 ************************************ 00:06:26.933 11:58:34 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 00:06:26.933 11:58:34 -- accel/accel.sh@16 -- # local accel_opc 00:06:26.933 11:58:34 -- accel/accel.sh@17 -- # local accel_module 00:06:26.933 11:58:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:06:26.933 11:58:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:26.933 11:58:34 -- accel/accel.sh@12 -- # build_accel_config 00:06:26.933 11:58:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:26.933 11:58:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.933 11:58:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.933 11:58:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:26.933 11:58:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:26.933 11:58:34 -- accel/accel.sh@41 -- # local IFS=, 00:06:26.933 11:58:34 -- accel/accel.sh@42 -- # jq -r . 00:06:26.933 [2024-07-25 11:58:34.078344] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:26.933 [2024-07-25 11:58:34.078401] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1188881 ] 00:06:26.933 [2024-07-25 11:58:34.164508] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.192 [2024-07-25 11:58:34.249511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.567 11:58:35 -- accel/accel.sh@18 -- # out=' 00:06:28.567 SPDK Configuration: 00:06:28.567 Core mask: 0x1 00:06:28.567 00:06:28.567 Accel Perf Configuration: 00:06:28.567 Workload Type: compare 00:06:28.567 Transfer size: 4096 bytes 00:06:28.567 Vector count 1 00:06:28.567 Module: software 00:06:28.567 Queue depth: 32 00:06:28.567 Allocate depth: 32 00:06:28.567 # threads/core: 1 00:06:28.567 Run time: 1 seconds 00:06:28.567 Verify: Yes 00:06:28.567 00:06:28.567 Running for 1 seconds... 00:06:28.567 00:06:28.567 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:28.567 ------------------------------------------------------------------------------------ 00:06:28.567 0,0 613536/s 2396 MiB/s 0 0 00:06:28.567 ==================================================================================== 00:06:28.567 Total 613536/s 2396 MiB/s 0 0' 00:06:28.567 11:58:35 -- accel/accel.sh@20 -- # IFS=: 00:06:28.567 11:58:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:28.567 11:58:35 -- accel/accel.sh@20 -- # read -r var val 00:06:28.567 11:58:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:28.567 11:58:35 -- accel/accel.sh@12 -- # build_accel_config 00:06:28.567 11:58:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:28.567 11:58:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.567 11:58:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.567 11:58:35 -- accel/accel.sh@35 -- 
# [[ 0 -gt 0 ]] 00:06:28.567 11:58:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:28.567 11:58:35 -- accel/accel.sh@41 -- # local IFS=, 00:06:28.567 11:58:35 -- accel/accel.sh@42 -- # jq -r . 00:06:28.567 [2024-07-25 11:58:35.500709] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:28.567 [2024-07-25 11:58:35.500760] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1189067 ] 00:06:28.567 [2024-07-25 11:58:35.587403] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:28.567 [2024-07-25 11:58:35.669398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.567 11:58:35 -- accel/accel.sh@21 -- # val= 00:06:28.567 11:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.567 11:58:35 -- accel/accel.sh@20 -- # IFS=: 00:06:28.567 11:58:35 -- accel/accel.sh@20 -- # read -r var val 00:06:28.567 11:58:35 -- accel/accel.sh@21 -- # val= 00:06:28.567 11:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.567 11:58:35 -- accel/accel.sh@20 -- # IFS=: 00:06:28.567 11:58:35 -- accel/accel.sh@20 -- # read -r var val 00:06:28.567 11:58:35 -- accel/accel.sh@21 -- # val=0x1 00:06:28.567 11:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.567 11:58:35 -- accel/accel.sh@20 -- # IFS=: 00:06:28.567 11:58:35 -- accel/accel.sh@20 -- # read -r var val 00:06:28.567 11:58:35 -- accel/accel.sh@21 -- # val= 00:06:28.567 11:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.567 11:58:35 -- accel/accel.sh@20 -- # IFS=: 00:06:28.567 11:58:35 -- accel/accel.sh@20 -- # read -r var val 00:06:28.567 11:58:35 -- accel/accel.sh@21 -- # val= 00:06:28.567 11:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.567 11:58:35 -- accel/accel.sh@20 -- # IFS=: 00:06:28.567 11:58:35 -- accel/accel.sh@20 -- # read -r var val 00:06:28.567 11:58:35 -- 
accel/accel.sh@21 -- # val=compare 00:06:28.567 11:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.567 11:58:35 -- accel/accel.sh@24 -- # accel_opc=compare 00:06:28.567 11:58:35 -- accel/accel.sh@20 -- # IFS=: 00:06:28.567 11:58:35 -- accel/accel.sh@20 -- # read -r var val 00:06:28.567 11:58:35 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:28.567 11:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.567 11:58:35 -- accel/accel.sh@20 -- # IFS=: 00:06:28.567 11:58:35 -- accel/accel.sh@20 -- # read -r var val 00:06:28.567 11:58:35 -- accel/accel.sh@21 -- # val= 00:06:28.567 11:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.567 11:58:35 -- accel/accel.sh@20 -- # IFS=: 00:06:28.567 11:58:35 -- accel/accel.sh@20 -- # read -r var val 00:06:28.567 11:58:35 -- accel/accel.sh@21 -- # val=software 00:06:28.567 11:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.567 11:58:35 -- accel/accel.sh@23 -- # accel_module=software 00:06:28.567 11:58:35 -- accel/accel.sh@20 -- # IFS=: 00:06:28.567 11:58:35 -- accel/accel.sh@20 -- # read -r var val 00:06:28.567 11:58:35 -- accel/accel.sh@21 -- # val=32 00:06:28.567 11:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.567 11:58:35 -- accel/accel.sh@20 -- # IFS=: 00:06:28.567 11:58:35 -- accel/accel.sh@20 -- # read -r var val 00:06:28.567 11:58:35 -- accel/accel.sh@21 -- # val=32 00:06:28.567 11:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.567 11:58:35 -- accel/accel.sh@20 -- # IFS=: 00:06:28.567 11:58:35 -- accel/accel.sh@20 -- # read -r var val 00:06:28.567 11:58:35 -- accel/accel.sh@21 -- # val=1 00:06:28.567 11:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.567 11:58:35 -- accel/accel.sh@20 -- # IFS=: 00:06:28.567 11:58:35 -- accel/accel.sh@20 -- # read -r var val 00:06:28.567 11:58:35 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:28.567 11:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.567 11:58:35 -- accel/accel.sh@20 -- # IFS=: 00:06:28.567 11:58:35 -- 
accel/accel.sh@20 -- # read -r var val 00:06:28.567 11:58:35 -- accel/accel.sh@21 -- # val=Yes 00:06:28.567 11:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.567 11:58:35 -- accel/accel.sh@20 -- # IFS=: 00:06:28.567 11:58:35 -- accel/accel.sh@20 -- # read -r var val 00:06:28.567 11:58:35 -- accel/accel.sh@21 -- # val= 00:06:28.567 11:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.567 11:58:35 -- accel/accel.sh@20 -- # IFS=: 00:06:28.567 11:58:35 -- accel/accel.sh@20 -- # read -r var val 00:06:28.567 11:58:35 -- accel/accel.sh@21 -- # val= 00:06:28.567 11:58:35 -- accel/accel.sh@22 -- # case "$var" in 00:06:28.567 11:58:35 -- accel/accel.sh@20 -- # IFS=: 00:06:28.567 11:58:35 -- accel/accel.sh@20 -- # read -r var val 00:06:29.945 11:58:36 -- accel/accel.sh@21 -- # val= 00:06:29.945 11:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.945 11:58:36 -- accel/accel.sh@20 -- # IFS=: 00:06:29.945 11:58:36 -- accel/accel.sh@20 -- # read -r var val 00:06:29.945 11:58:36 -- accel/accel.sh@21 -- # val= 00:06:29.945 11:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.945 11:58:36 -- accel/accel.sh@20 -- # IFS=: 00:06:29.945 11:58:36 -- accel/accel.sh@20 -- # read -r var val 00:06:29.945 11:58:36 -- accel/accel.sh@21 -- # val= 00:06:29.945 11:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.945 11:58:36 -- accel/accel.sh@20 -- # IFS=: 00:06:29.945 11:58:36 -- accel/accel.sh@20 -- # read -r var val 00:06:29.945 11:58:36 -- accel/accel.sh@21 -- # val= 00:06:29.945 11:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.945 11:58:36 -- accel/accel.sh@20 -- # IFS=: 00:06:29.945 11:58:36 -- accel/accel.sh@20 -- # read -r var val 00:06:29.945 11:58:36 -- accel/accel.sh@21 -- # val= 00:06:29.945 11:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:06:29.945 11:58:36 -- accel/accel.sh@20 -- # IFS=: 00:06:29.945 11:58:36 -- accel/accel.sh@20 -- # read -r var val 00:06:29.945 11:58:36 -- accel/accel.sh@21 -- # val= 00:06:29.945 11:58:36 -- 
accel/accel.sh@22 -- # case "$var" in 00:06:29.945 11:58:36 -- accel/accel.sh@20 -- # IFS=: 00:06:29.945 11:58:36 -- accel/accel.sh@20 -- # read -r var val 00:06:29.945 11:58:36 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:29.945 11:58:36 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:06:29.945 11:58:36 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.945 00:06:29.945 real 0m2.860s 00:06:29.945 user 0m2.521s 00:06:29.945 sys 0m0.323s 00:06:29.945 11:58:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.945 11:58:36 -- common/autotest_common.sh@10 -- # set +x 00:06:29.945 ************************************ 00:06:29.945 END TEST accel_compare 00:06:29.945 ************************************ 00:06:29.945 11:58:36 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:29.945 11:58:36 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:29.945 11:58:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:29.945 11:58:36 -- common/autotest_common.sh@10 -- # set +x 00:06:29.945 ************************************ 00:06:29.945 START TEST accel_xor 00:06:29.945 ************************************ 00:06:29.945 11:58:36 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:06:29.945 11:58:36 -- accel/accel.sh@16 -- # local accel_opc 00:06:29.945 11:58:36 -- accel/accel.sh@17 -- # local accel_module 00:06:29.945 11:58:36 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:06:29.945 11:58:36 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:29.945 11:58:36 -- accel/accel.sh@12 -- # build_accel_config 00:06:29.945 11:58:36 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:29.945 11:58:36 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.945 11:58:36 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.945 11:58:36 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:29.945 11:58:36 -- accel/accel.sh@37 -- # 
[[ -n '' ]] 00:06:29.945 11:58:36 -- accel/accel.sh@41 -- # local IFS=, 00:06:29.945 11:58:36 -- accel/accel.sh@42 -- # jq -r . 00:06:29.945 [2024-07-25 11:58:36.985590] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:29.945 [2024-07-25 11:58:36.985663] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1189260 ] 00:06:29.945 [2024-07-25 11:58:37.073195] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.945 [2024-07-25 11:58:37.157319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.322 11:58:38 -- accel/accel.sh@18 -- # out=' 00:06:31.322 SPDK Configuration: 00:06:31.322 Core mask: 0x1 00:06:31.322 00:06:31.322 Accel Perf Configuration: 00:06:31.322 Workload Type: xor 00:06:31.322 Source buffers: 2 00:06:31.322 Transfer size: 4096 bytes 00:06:31.322 Vector count 1 00:06:31.322 Module: software 00:06:31.322 Queue depth: 32 00:06:31.322 Allocate depth: 32 00:06:31.322 # threads/core: 1 00:06:31.322 Run time: 1 seconds 00:06:31.322 Verify: Yes 00:06:31.322 00:06:31.322 Running for 1 seconds... 
00:06:31.322 00:06:31.322 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:31.322 ------------------------------------------------------------------------------------ 00:06:31.322 0,0 484832/s 1893 MiB/s 0 0 00:06:31.322 ==================================================================================== 00:06:31.322 Total 484832/s 1893 MiB/s 0 0' 00:06:31.322 11:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:31.322 11:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:31.322 11:58:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:31.322 11:58:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:31.322 11:58:38 -- accel/accel.sh@12 -- # build_accel_config 00:06:31.322 11:58:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:31.322 11:58:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.322 11:58:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.322 11:58:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:31.322 11:58:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:31.322 11:58:38 -- accel/accel.sh@41 -- # local IFS=, 00:06:31.322 11:58:38 -- accel/accel.sh@42 -- # jq -r . 00:06:31.322 [2024-07-25 11:58:38.413946] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:31.322 [2024-07-25 11:58:38.413997] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1189448 ] 00:06:31.322 [2024-07-25 11:58:38.502024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.322 [2024-07-25 11:58:38.584902] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.581 11:58:38 -- accel/accel.sh@21 -- # val= 00:06:31.581 11:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.581 11:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:31.581 11:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:31.581 11:58:38 -- accel/accel.sh@21 -- # val= 00:06:31.581 11:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.581 11:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:31.581 11:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:31.581 11:58:38 -- accel/accel.sh@21 -- # val=0x1 00:06:31.581 11:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.581 11:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:31.581 11:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:31.581 11:58:38 -- accel/accel.sh@21 -- # val= 00:06:31.581 11:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.581 11:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:31.581 11:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:31.581 11:58:38 -- accel/accel.sh@21 -- # val= 00:06:31.581 11:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.581 11:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:31.581 11:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:31.581 11:58:38 -- accel/accel.sh@21 -- # val=xor 00:06:31.581 11:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.581 11:58:38 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:31.581 11:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:31.581 11:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:31.581 11:58:38 -- 
accel/accel.sh@21 -- # val=2 00:06:31.581 11:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.581 11:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:31.581 11:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:31.581 11:58:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:31.581 11:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.581 11:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:31.581 11:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:31.581 11:58:38 -- accel/accel.sh@21 -- # val= 00:06:31.581 11:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.581 11:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:31.581 11:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:31.581 11:58:38 -- accel/accel.sh@21 -- # val=software 00:06:31.581 11:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.581 11:58:38 -- accel/accel.sh@23 -- # accel_module=software 00:06:31.581 11:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:31.581 11:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:31.581 11:58:38 -- accel/accel.sh@21 -- # val=32 00:06:31.581 11:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.581 11:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:31.581 11:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:31.581 11:58:38 -- accel/accel.sh@21 -- # val=32 00:06:31.581 11:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.581 11:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:31.581 11:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:31.581 11:58:38 -- accel/accel.sh@21 -- # val=1 00:06:31.581 11:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.581 11:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:31.581 11:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:31.581 11:58:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:31.581 11:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.581 11:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:31.581 11:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:31.581 11:58:38 -- accel/accel.sh@21 -- 
# val=Yes 00:06:31.581 11:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.581 11:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:31.581 11:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:31.581 11:58:38 -- accel/accel.sh@21 -- # val= 00:06:31.581 11:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.581 11:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:31.581 11:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:31.581 11:58:38 -- accel/accel.sh@21 -- # val= 00:06:31.581 11:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:06:31.581 11:58:38 -- accel/accel.sh@20 -- # IFS=: 00:06:31.581 11:58:38 -- accel/accel.sh@20 -- # read -r var val 00:06:32.516 11:58:39 -- accel/accel.sh@21 -- # val= 00:06:32.516 11:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.516 11:58:39 -- accel/accel.sh@20 -- # IFS=: 00:06:32.516 11:58:39 -- accel/accel.sh@20 -- # read -r var val 00:06:32.516 11:58:39 -- accel/accel.sh@21 -- # val= 00:06:32.516 11:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.516 11:58:39 -- accel/accel.sh@20 -- # IFS=: 00:06:32.516 11:58:39 -- accel/accel.sh@20 -- # read -r var val 00:06:32.516 11:58:39 -- accel/accel.sh@21 -- # val= 00:06:32.516 11:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.516 11:58:39 -- accel/accel.sh@20 -- # IFS=: 00:06:32.516 11:58:39 -- accel/accel.sh@20 -- # read -r var val 00:06:32.516 11:58:39 -- accel/accel.sh@21 -- # val= 00:06:32.516 11:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.516 11:58:39 -- accel/accel.sh@20 -- # IFS=: 00:06:32.516 11:58:39 -- accel/accel.sh@20 -- # read -r var val 00:06:32.516 11:58:39 -- accel/accel.sh@21 -- # val= 00:06:32.516 11:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.516 11:58:39 -- accel/accel.sh@20 -- # IFS=: 00:06:32.516 11:58:39 -- accel/accel.sh@20 -- # read -r var val 00:06:32.516 11:58:39 -- accel/accel.sh@21 -- # val= 00:06:32.516 11:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:06:32.516 11:58:39 -- accel/accel.sh@20 -- # IFS=: 
00:06:32.516 11:58:39 -- accel/accel.sh@20 -- # read -r var val 00:06:32.516 11:58:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:32.516 11:58:39 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:32.516 11:58:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.516 00:06:32.516 real 0m2.865s 00:06:32.516 user 0m2.537s 00:06:32.516 sys 0m0.307s 00:06:32.516 11:58:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.516 11:58:39 -- common/autotest_common.sh@10 -- # set +x 00:06:32.516 ************************************ 00:06:32.516 END TEST accel_xor 00:06:32.516 ************************************ 00:06:32.775 11:58:39 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:32.775 11:58:39 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:32.775 11:58:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:32.775 11:58:39 -- common/autotest_common.sh@10 -- # set +x 00:06:32.775 ************************************ 00:06:32.775 START TEST accel_xor 00:06:32.775 ************************************ 00:06:32.775 11:58:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:06:32.775 11:58:39 -- accel/accel.sh@16 -- # local accel_opc 00:06:32.775 11:58:39 -- accel/accel.sh@17 -- # local accel_module 00:06:32.775 11:58:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:06:32.775 11:58:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:32.775 11:58:39 -- accel/accel.sh@12 -- # build_accel_config 00:06:32.775 11:58:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:32.775 11:58:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.775 11:58:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.775 11:58:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:32.775 11:58:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:32.775 11:58:39 -- accel/accel.sh@41 -- # local IFS=, 
00:06:32.775 11:58:39 -- accel/accel.sh@42 -- # jq -r . 00:06:32.775 [2024-07-25 11:58:39.890139] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:32.775 [2024-07-25 11:58:39.890199] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1189647 ] 00:06:32.775 [2024-07-25 11:58:39.978669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:32.775 [2024-07-25 11:58:40.077932] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.158 11:58:41 -- accel/accel.sh@18 -- # out=' 00:06:34.158 SPDK Configuration: 00:06:34.158 Core mask: 0x1 00:06:34.158 00:06:34.158 Accel Perf Configuration: 00:06:34.158 Workload Type: xor 00:06:34.158 Source buffers: 3 00:06:34.158 Transfer size: 4096 bytes 00:06:34.158 Vector count 1 00:06:34.158 Module: software 00:06:34.158 Queue depth: 32 00:06:34.158 Allocate depth: 32 00:06:34.158 # threads/core: 1 00:06:34.158 Run time: 1 seconds 00:06:34.158 Verify: Yes 00:06:34.158 00:06:34.158 Running for 1 seconds... 
00:06:34.158 00:06:34.158 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:34.158 ------------------------------------------------------------------------------------ 00:06:34.158 0,0 469824/s 1835 MiB/s 0 0 00:06:34.158 ==================================================================================== 00:06:34.158 Total 469824/s 1835 MiB/s 0 0' 00:06:34.158 11:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.158 11:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:34.158 11:58:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:34.158 11:58:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:34.158 11:58:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:34.158 11:58:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:34.158 11:58:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:34.158 11:58:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:34.158 11:58:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:34.158 11:58:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:34.158 11:58:41 -- accel/accel.sh@41 -- # local IFS=, 00:06:34.158 11:58:41 -- accel/accel.sh@42 -- # jq -r . 00:06:34.158 [2024-07-25 11:58:41.354431] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:34.158 [2024-07-25 11:58:41.354496] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1189852 ] 00:06:34.158 [2024-07-25 11:58:41.440165] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.418 [2024-07-25 11:58:41.524736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.418 11:58:41 -- accel/accel.sh@21 -- # val= 00:06:34.418 11:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.418 11:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.418 11:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:34.418 11:58:41 -- accel/accel.sh@21 -- # val= 00:06:34.418 11:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.418 11:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.418 11:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:34.418 11:58:41 -- accel/accel.sh@21 -- # val=0x1 00:06:34.418 11:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.418 11:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.418 11:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:34.418 11:58:41 -- accel/accel.sh@21 -- # val= 00:06:34.418 11:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.418 11:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.418 11:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:34.418 11:58:41 -- accel/accel.sh@21 -- # val= 00:06:34.418 11:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.418 11:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.418 11:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:34.418 11:58:41 -- accel/accel.sh@21 -- # val=xor 00:06:34.418 11:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.418 11:58:41 -- accel/accel.sh@24 -- # accel_opc=xor 00:06:34.418 11:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.418 11:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:34.418 11:58:41 -- 
accel/accel.sh@21 -- # val=3 00:06:34.418 11:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.418 11:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.418 11:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:34.418 11:58:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:34.418 11:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.418 11:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.418 11:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:34.418 11:58:41 -- accel/accel.sh@21 -- # val= 00:06:34.418 11:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.418 11:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.418 11:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:34.418 11:58:41 -- accel/accel.sh@21 -- # val=software 00:06:34.418 11:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.418 11:58:41 -- accel/accel.sh@23 -- # accel_module=software 00:06:34.418 11:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.418 11:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:34.418 11:58:41 -- accel/accel.sh@21 -- # val=32 00:06:34.418 11:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.418 11:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.418 11:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:34.418 11:58:41 -- accel/accel.sh@21 -- # val=32 00:06:34.418 11:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.418 11:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.418 11:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:34.418 11:58:41 -- accel/accel.sh@21 -- # val=1 00:06:34.418 11:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.418 11:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.418 11:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:34.418 11:58:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:34.418 11:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.418 11:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.418 11:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:34.418 11:58:41 -- accel/accel.sh@21 -- 
# val=Yes 00:06:34.418 11:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.418 11:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.418 11:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:34.418 11:58:41 -- accel/accel.sh@21 -- # val= 00:06:34.418 11:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.418 11:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.418 11:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:34.418 11:58:41 -- accel/accel.sh@21 -- # val= 00:06:34.418 11:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:06:34.418 11:58:41 -- accel/accel.sh@20 -- # IFS=: 00:06:34.418 11:58:41 -- accel/accel.sh@20 -- # read -r var val 00:06:35.837 11:58:42 -- accel/accel.sh@21 -- # val= 00:06:35.838 11:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.838 11:58:42 -- accel/accel.sh@20 -- # IFS=: 00:06:35.838 11:58:42 -- accel/accel.sh@20 -- # read -r var val 00:06:35.838 11:58:42 -- accel/accel.sh@21 -- # val= 00:06:35.838 11:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.838 11:58:42 -- accel/accel.sh@20 -- # IFS=: 00:06:35.838 11:58:42 -- accel/accel.sh@20 -- # read -r var val 00:06:35.838 11:58:42 -- accel/accel.sh@21 -- # val= 00:06:35.838 11:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.838 11:58:42 -- accel/accel.sh@20 -- # IFS=: 00:06:35.838 11:58:42 -- accel/accel.sh@20 -- # read -r var val 00:06:35.838 11:58:42 -- accel/accel.sh@21 -- # val= 00:06:35.838 11:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.838 11:58:42 -- accel/accel.sh@20 -- # IFS=: 00:06:35.838 11:58:42 -- accel/accel.sh@20 -- # read -r var val 00:06:35.838 11:58:42 -- accel/accel.sh@21 -- # val= 00:06:35.838 11:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.838 11:58:42 -- accel/accel.sh@20 -- # IFS=: 00:06:35.838 11:58:42 -- accel/accel.sh@20 -- # read -r var val 00:06:35.838 11:58:42 -- accel/accel.sh@21 -- # val= 00:06:35.838 11:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:06:35.838 11:58:42 -- accel/accel.sh@20 -- # IFS=: 
00:06:35.838 11:58:42 -- accel/accel.sh@20 -- # read -r var val 00:06:35.838 11:58:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:35.838 11:58:42 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:06:35.838 11:58:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:35.838 00:06:35.838 real 0m2.923s 00:06:35.838 user 0m2.570s 00:06:35.838 sys 0m0.312s 00:06:35.838 11:58:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.838 11:58:42 -- common/autotest_common.sh@10 -- # set +x 00:06:35.838 ************************************ 00:06:35.838 END TEST accel_xor 00:06:35.838 ************************************ 00:06:35.838 11:58:42 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:35.838 11:58:42 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:35.838 11:58:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.838 11:58:42 -- common/autotest_common.sh@10 -- # set +x 00:06:35.838 ************************************ 00:06:35.838 START TEST accel_dif_verify 00:06:35.838 ************************************ 00:06:35.838 11:58:42 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:06:35.838 11:58:42 -- accel/accel.sh@16 -- # local accel_opc 00:06:35.838 11:58:42 -- accel/accel.sh@17 -- # local accel_module 00:06:35.838 11:58:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:06:35.838 11:58:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:35.838 11:58:42 -- accel/accel.sh@12 -- # build_accel_config 00:06:35.838 11:58:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:35.838 11:58:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:35.838 11:58:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:35.838 11:58:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:35.838 11:58:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:35.838 11:58:42 -- accel/accel.sh@41 -- # local IFS=, 
00:06:35.838 11:58:42 -- accel/accel.sh@42 -- # jq -r . 00:06:35.838 [2024-07-25 11:58:42.852442] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:35.838 [2024-07-25 11:58:42.852501] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1190119 ] 00:06:35.838 [2024-07-25 11:58:42.944325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.838 [2024-07-25 11:58:43.028717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.215 11:58:44 -- accel/accel.sh@18 -- # out=' 00:06:37.215 SPDK Configuration: 00:06:37.215 Core mask: 0x1 00:06:37.215 00:06:37.215 Accel Perf Configuration: 00:06:37.215 Workload Type: dif_verify 00:06:37.215 Vector size: 4096 bytes 00:06:37.215 Transfer size: 4096 bytes 00:06:37.215 Block size: 512 bytes 00:06:37.215 Metadata size: 8 bytes 00:06:37.215 Vector count 1 00:06:37.215 Module: software 00:06:37.215 Queue depth: 32 00:06:37.215 Allocate depth: 32 00:06:37.215 # threads/core: 1 00:06:37.215 Run time: 1 seconds 00:06:37.215 Verify: No 00:06:37.215 00:06:37.215 Running for 1 seconds... 
00:06:37.215 00:06:37.215 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:37.215 ------------------------------------------------------------------------------------ 00:06:37.215 0,0 134240/s 524 MiB/s 0 0 00:06:37.215 ==================================================================================== 00:06:37.215 Total 134240/s 524 MiB/s 0 0' 00:06:37.215 11:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:37.215 11:58:44 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:37.215 11:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:37.215 11:58:44 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:37.215 11:58:44 -- accel/accel.sh@12 -- # build_accel_config 00:06:37.215 11:58:44 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:37.215 11:58:44 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:37.215 11:58:44 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:37.215 11:58:44 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:37.215 11:58:44 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:37.215 11:58:44 -- accel/accel.sh@41 -- # local IFS=, 00:06:37.215 11:58:44 -- accel/accel.sh@42 -- # jq -r . 00:06:37.215 [2024-07-25 11:58:44.283458] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:37.215 [2024-07-25 11:58:44.283507] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1190364 ] 00:06:37.215 [2024-07-25 11:58:44.368493] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.215 [2024-07-25 11:58:44.451032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.215 11:58:44 -- accel/accel.sh@21 -- # val= 00:06:37.215 11:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.215 11:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:37.215 11:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:37.215 11:58:44 -- accel/accel.sh@21 -- # val= 00:06:37.215 11:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.215 11:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:37.215 11:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:37.215 11:58:44 -- accel/accel.sh@21 -- # val=0x1 00:06:37.215 11:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.215 11:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:37.215 11:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:37.215 11:58:44 -- accel/accel.sh@21 -- # val= 00:06:37.215 11:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.216 11:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:37.216 11:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:37.216 11:58:44 -- accel/accel.sh@21 -- # val= 00:06:37.216 11:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.216 11:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:37.216 11:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:37.216 11:58:44 -- accel/accel.sh@21 -- # val=dif_verify 00:06:37.216 11:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.216 11:58:44 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:06:37.216 11:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:37.216 11:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:37.216 11:58:44 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:37.216 11:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.216 11:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:37.216 11:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:37.216 11:58:44 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:37.216 11:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.216 11:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:37.216 11:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:37.216 11:58:44 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:37.216 11:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.216 11:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:37.216 11:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:37.216 11:58:44 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:37.216 11:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.216 11:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:37.216 11:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:37.216 11:58:44 -- accel/accel.sh@21 -- # val= 00:06:37.216 11:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.216 11:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:37.216 11:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:37.216 11:58:44 -- accel/accel.sh@21 -- # val=software 00:06:37.216 11:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.216 11:58:44 -- accel/accel.sh@23 -- # accel_module=software 00:06:37.216 11:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:37.216 11:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:37.216 11:58:44 -- accel/accel.sh@21 -- # val=32 00:06:37.216 11:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.216 11:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:37.216 11:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:37.216 11:58:44 -- accel/accel.sh@21 -- # val=32 00:06:37.216 11:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.216 11:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:37.216 11:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:37.216 11:58:44 -- 
accel/accel.sh@21 -- # val=1 00:06:37.216 11:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.216 11:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:37.216 11:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:37.216 11:58:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:37.216 11:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.216 11:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:37.216 11:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:37.216 11:58:44 -- accel/accel.sh@21 -- # val=No 00:06:37.216 11:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.216 11:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:37.216 11:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:37.216 11:58:44 -- accel/accel.sh@21 -- # val= 00:06:37.216 11:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.216 11:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:37.216 11:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:37.216 11:58:44 -- accel/accel.sh@21 -- # val= 00:06:37.216 11:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:06:37.216 11:58:44 -- accel/accel.sh@20 -- # IFS=: 00:06:37.216 11:58:44 -- accel/accel.sh@20 -- # read -r var val 00:06:38.595 11:58:45 -- accel/accel.sh@21 -- # val= 00:06:38.595 11:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.595 11:58:45 -- accel/accel.sh@20 -- # IFS=: 00:06:38.595 11:58:45 -- accel/accel.sh@20 -- # read -r var val 00:06:38.595 11:58:45 -- accel/accel.sh@21 -- # val= 00:06:38.595 11:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.595 11:58:45 -- accel/accel.sh@20 -- # IFS=: 00:06:38.595 11:58:45 -- accel/accel.sh@20 -- # read -r var val 00:06:38.595 11:58:45 -- accel/accel.sh@21 -- # val= 00:06:38.595 11:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.595 11:58:45 -- accel/accel.sh@20 -- # IFS=: 00:06:38.595 11:58:45 -- accel/accel.sh@20 -- # read -r var val 00:06:38.595 11:58:45 -- accel/accel.sh@21 -- # val= 00:06:38.595 11:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.595 11:58:45 
-- accel/accel.sh@20 -- # IFS=: 00:06:38.595 11:58:45 -- accel/accel.sh@20 -- # read -r var val 00:06:38.595 11:58:45 -- accel/accel.sh@21 -- # val= 00:06:38.595 11:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.595 11:58:45 -- accel/accel.sh@20 -- # IFS=: 00:06:38.595 11:58:45 -- accel/accel.sh@20 -- # read -r var val 00:06:38.595 11:58:45 -- accel/accel.sh@21 -- # val= 00:06:38.595 11:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:06:38.595 11:58:45 -- accel/accel.sh@20 -- # IFS=: 00:06:38.595 11:58:45 -- accel/accel.sh@20 -- # read -r var val 00:06:38.595 11:58:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:38.595 11:58:45 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:06:38.595 11:58:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:38.595 00:06:38.595 real 0m2.872s 00:06:38.595 user 0m2.566s 00:06:38.595 sys 0m0.290s 00:06:38.595 11:58:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.595 11:58:45 -- common/autotest_common.sh@10 -- # set +x 00:06:38.595 ************************************ 00:06:38.595 END TEST accel_dif_verify 00:06:38.595 ************************************ 00:06:38.595 11:58:45 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:38.595 11:58:45 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:38.595 11:58:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:38.595 11:58:45 -- common/autotest_common.sh@10 -- # set +x 00:06:38.595 ************************************ 00:06:38.595 START TEST accel_dif_generate 00:06:38.595 ************************************ 00:06:38.595 11:58:45 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:06:38.595 11:58:45 -- accel/accel.sh@16 -- # local accel_opc 00:06:38.595 11:58:45 -- accel/accel.sh@17 -- # local accel_module 00:06:38.595 11:58:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:06:38.595 11:58:45 -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:38.595 11:58:45 -- accel/accel.sh@12 -- # build_accel_config 00:06:38.595 11:58:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:38.595 11:58:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.595 11:58:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.595 11:58:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:38.595 11:58:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:38.595 11:58:45 -- accel/accel.sh@41 -- # local IFS=, 00:06:38.595 11:58:45 -- accel/accel.sh@42 -- # jq -r . 00:06:38.595 [2024-07-25 11:58:45.767611] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:38.595 [2024-07-25 11:58:45.767685] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1190578 ] 00:06:38.595 [2024-07-25 11:58:45.853573] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.855 [2024-07-25 11:58:45.934867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.233 11:58:47 -- accel/accel.sh@18 -- # out=' 00:06:40.233 SPDK Configuration: 00:06:40.233 Core mask: 0x1 00:06:40.233 00:06:40.233 Accel Perf Configuration: 00:06:40.233 Workload Type: dif_generate 00:06:40.233 Vector size: 4096 bytes 00:06:40.233 Transfer size: 4096 bytes 00:06:40.233 Block size: 512 bytes 00:06:40.233 Metadata size: 8 bytes 00:06:40.233 Vector count 1 00:06:40.233 Module: software 00:06:40.233 Queue depth: 32 00:06:40.233 Allocate depth: 32 00:06:40.233 # threads/core: 1 00:06:40.233 Run time: 1 seconds 00:06:40.233 Verify: No 00:06:40.233 00:06:40.233 Running for 1 seconds... 
00:06:40.233 00:06:40.233 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:40.233 ------------------------------------------------------------------------------------ 00:06:40.233 0,0 161536/s 631 MiB/s 0 0 00:06:40.233 ==================================================================================== 00:06:40.233 Total 161536/s 631 MiB/s 0 0' 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:40.233 11:58:47 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:40.233 11:58:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:40.233 11:58:47 -- accel/accel.sh@12 -- # build_accel_config 00:06:40.233 11:58:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:40.233 11:58:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.233 11:58:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.233 11:58:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:40.233 11:58:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:40.233 11:58:47 -- accel/accel.sh@41 -- # local IFS=, 00:06:40.233 11:58:47 -- accel/accel.sh@42 -- # jq -r . 00:06:40.233 [2024-07-25 11:58:47.177394] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:40.233 [2024-07-25 11:58:47.177443] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1190759 ] 00:06:40.233 [2024-07-25 11:58:47.263501] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.233 [2024-07-25 11:58:47.345738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.233 11:58:47 -- accel/accel.sh@21 -- # val= 00:06:40.233 11:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:40.233 11:58:47 -- accel/accel.sh@21 -- # val= 00:06:40.233 11:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:40.233 11:58:47 -- accel/accel.sh@21 -- # val=0x1 00:06:40.233 11:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:40.233 11:58:47 -- accel/accel.sh@21 -- # val= 00:06:40.233 11:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:40.233 11:58:47 -- accel/accel.sh@21 -- # val= 00:06:40.233 11:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:40.233 11:58:47 -- accel/accel.sh@21 -- # val=dif_generate 00:06:40.233 11:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.233 11:58:47 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:40.233 11:58:47 
-- accel/accel.sh@21 -- # val='4096 bytes' 00:06:40.233 11:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:40.233 11:58:47 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:40.233 11:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:40.233 11:58:47 -- accel/accel.sh@21 -- # val='512 bytes' 00:06:40.233 11:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:40.233 11:58:47 -- accel/accel.sh@21 -- # val='8 bytes' 00:06:40.233 11:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:40.233 11:58:47 -- accel/accel.sh@21 -- # val= 00:06:40.233 11:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:40.233 11:58:47 -- accel/accel.sh@21 -- # val=software 00:06:40.233 11:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.233 11:58:47 -- accel/accel.sh@23 -- # accel_module=software 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:40.233 11:58:47 -- accel/accel.sh@21 -- # val=32 00:06:40.233 11:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:40.233 11:58:47 -- accel/accel.sh@21 -- # val=32 00:06:40.233 11:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:40.233 11:58:47 
-- accel/accel.sh@21 -- # val=1 00:06:40.233 11:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:40.233 11:58:47 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:40.233 11:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:40.233 11:58:47 -- accel/accel.sh@21 -- # val=No 00:06:40.233 11:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:40.233 11:58:47 -- accel/accel.sh@21 -- # val= 00:06:40.233 11:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:40.233 11:58:47 -- accel/accel.sh@21 -- # val= 00:06:40.233 11:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # IFS=: 00:06:40.233 11:58:47 -- accel/accel.sh@20 -- # read -r var val 00:06:41.611 11:58:48 -- accel/accel.sh@21 -- # val= 00:06:41.611 11:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.611 11:58:48 -- accel/accel.sh@20 -- # IFS=: 00:06:41.611 11:58:48 -- accel/accel.sh@20 -- # read -r var val 00:06:41.611 11:58:48 -- accel/accel.sh@21 -- # val= 00:06:41.611 11:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.611 11:58:48 -- accel/accel.sh@20 -- # IFS=: 00:06:41.611 11:58:48 -- accel/accel.sh@20 -- # read -r var val 00:06:41.611 11:58:48 -- accel/accel.sh@21 -- # val= 00:06:41.611 11:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.611 11:58:48 -- accel/accel.sh@20 -- # IFS=: 00:06:41.611 11:58:48 -- accel/accel.sh@20 -- # read -r var val 00:06:41.611 11:58:48 -- accel/accel.sh@21 -- # val= 00:06:41.611 11:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.611 
11:58:48 -- accel/accel.sh@20 -- # IFS=: 00:06:41.611 11:58:48 -- accel/accel.sh@20 -- # read -r var val 00:06:41.611 11:58:48 -- accel/accel.sh@21 -- # val= 00:06:41.611 11:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.611 11:58:48 -- accel/accel.sh@20 -- # IFS=: 00:06:41.611 11:58:48 -- accel/accel.sh@20 -- # read -r var val 00:06:41.611 11:58:48 -- accel/accel.sh@21 -- # val= 00:06:41.611 11:58:48 -- accel/accel.sh@22 -- # case "$var" in 00:06:41.611 11:58:48 -- accel/accel.sh@20 -- # IFS=: 00:06:41.611 11:58:48 -- accel/accel.sh@20 -- # read -r var val 00:06:41.611 11:58:48 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:41.611 11:58:48 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:06:41.611 11:58:48 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.611 00:06:41.611 real 0m2.846s 00:06:41.611 user 0m2.530s 00:06:41.611 sys 0m0.299s 00:06:41.611 11:58:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.611 11:58:48 -- common/autotest_common.sh@10 -- # set +x 00:06:41.611 ************************************ 00:06:41.611 END TEST accel_dif_generate 00:06:41.611 ************************************ 00:06:41.611 11:58:48 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:41.611 11:58:48 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:06:41.611 11:58:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.611 11:58:48 -- common/autotest_common.sh@10 -- # set +x 00:06:41.611 ************************************ 00:06:41.611 START TEST accel_dif_generate_copy 00:06:41.611 ************************************ 00:06:41.611 11:58:48 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:06:41.611 11:58:48 -- accel/accel.sh@16 -- # local accel_opc 00:06:41.611 11:58:48 -- accel/accel.sh@17 -- # local accel_module 00:06:41.611 11:58:48 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:06:41.611 11:58:48 -- 
accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:41.611 11:58:48 -- accel/accel.sh@12 -- # build_accel_config 00:06:41.611 11:58:48 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:41.611 11:58:48 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:41.611 11:58:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:41.611 11:58:48 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:41.611 11:58:48 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:41.611 11:58:48 -- accel/accel.sh@41 -- # local IFS=, 00:06:41.611 11:58:48 -- accel/accel.sh@42 -- # jq -r . 00:06:41.611 [2024-07-25 11:58:48.653751] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:41.611 [2024-07-25 11:58:48.653812] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1190957 ] 00:06:41.611 [2024-07-25 11:58:48.740193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.611 [2024-07-25 11:58:48.823526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.989 11:58:50 -- accel/accel.sh@18 -- # out=' 00:06:42.989 SPDK Configuration: 00:06:42.989 Core mask: 0x1 00:06:42.989 00:06:42.990 Accel Perf Configuration: 00:06:42.990 Workload Type: dif_generate_copy 00:06:42.990 Vector size: 4096 bytes 00:06:42.990 Transfer size: 4096 bytes 00:06:42.990 Vector count 1 00:06:42.990 Module: software 00:06:42.990 Queue depth: 32 00:06:42.990 Allocate depth: 32 00:06:42.990 # threads/core: 1 00:06:42.990 Run time: 1 seconds 00:06:42.990 Verify: No 00:06:42.990 00:06:42.990 Running for 1 seconds... 
00:06:42.990 00:06:42.990 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:42.990 ------------------------------------------------------------------------------------ 00:06:42.990 0,0 124672/s 494 MiB/s 0 0 00:06:42.990 ==================================================================================== 00:06:42.990 Total 124672/s 487 MiB/s 0 0' 00:06:42.990 11:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:42.990 11:58:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:42.990 11:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:42.990 11:58:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:42.990 11:58:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.990 11:58:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:42.990 11:58:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.990 11:58:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.990 11:58:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:42.990 11:58:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:42.990 11:58:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:42.990 11:58:50 -- accel/accel.sh@42 -- # jq -r . 00:06:42.990 [2024-07-25 11:58:50.088756] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:42.990 [2024-07-25 11:58:50.088806] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1191141 ] 00:06:42.990 [2024-07-25 11:58:50.177016] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.990 [2024-07-25 11:58:50.263400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.248 11:58:50 -- accel/accel.sh@21 -- # val= 00:06:43.248 11:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.248 11:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:43.248 11:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:43.248 11:58:50 -- accel/accel.sh@21 -- # val= 00:06:43.248 11:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.248 11:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:43.248 11:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:43.248 11:58:50 -- accel/accel.sh@21 -- # val=0x1 00:06:43.248 11:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.248 11:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:43.248 11:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:43.248 11:58:50 -- accel/accel.sh@21 -- # val= 00:06:43.248 11:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.248 11:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:43.248 11:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:43.248 11:58:50 -- accel/accel.sh@21 -- # val= 00:06:43.248 11:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.249 11:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:43.249 11:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:43.249 11:58:50 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:06:43.249 11:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.249 11:58:50 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:06:43.249 11:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:43.249 11:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:43.249 
11:58:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:43.249 11:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.249 11:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:43.249 11:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:43.249 11:58:50 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:43.249 11:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.249 11:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:43.249 11:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:43.249 11:58:50 -- accel/accel.sh@21 -- # val= 00:06:43.249 11:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.249 11:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:43.249 11:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:43.249 11:58:50 -- accel/accel.sh@21 -- # val=software 00:06:43.249 11:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.249 11:58:50 -- accel/accel.sh@23 -- # accel_module=software 00:06:43.249 11:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:43.249 11:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:43.249 11:58:50 -- accel/accel.sh@21 -- # val=32 00:06:43.249 11:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.249 11:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:43.249 11:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:43.249 11:58:50 -- accel/accel.sh@21 -- # val=32 00:06:43.249 11:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.249 11:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:43.249 11:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:43.249 11:58:50 -- accel/accel.sh@21 -- # val=1 00:06:43.249 11:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.249 11:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:43.249 11:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:43.249 11:58:50 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:43.249 11:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.249 11:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:43.249 11:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:43.249 11:58:50 
-- accel/accel.sh@21 -- # val=No 00:06:43.249 11:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.249 11:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:43.249 11:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:43.249 11:58:50 -- accel/accel.sh@21 -- # val= 00:06:43.249 11:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.249 11:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:43.249 11:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:43.249 11:58:50 -- accel/accel.sh@21 -- # val= 00:06:43.249 11:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:06:43.249 11:58:50 -- accel/accel.sh@20 -- # IFS=: 00:06:43.249 11:58:50 -- accel/accel.sh@20 -- # read -r var val 00:06:44.627 11:58:51 -- accel/accel.sh@21 -- # val= 00:06:44.627 11:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.627 11:58:51 -- accel/accel.sh@20 -- # IFS=: 00:06:44.627 11:58:51 -- accel/accel.sh@20 -- # read -r var val 00:06:44.627 11:58:51 -- accel/accel.sh@21 -- # val= 00:06:44.627 11:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.627 11:58:51 -- accel/accel.sh@20 -- # IFS=: 00:06:44.627 11:58:51 -- accel/accel.sh@20 -- # read -r var val 00:06:44.627 11:58:51 -- accel/accel.sh@21 -- # val= 00:06:44.627 11:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.627 11:58:51 -- accel/accel.sh@20 -- # IFS=: 00:06:44.627 11:58:51 -- accel/accel.sh@20 -- # read -r var val 00:06:44.627 11:58:51 -- accel/accel.sh@21 -- # val= 00:06:44.627 11:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.628 11:58:51 -- accel/accel.sh@20 -- # IFS=: 00:06:44.628 11:58:51 -- accel/accel.sh@20 -- # read -r var val 00:06:44.628 11:58:51 -- accel/accel.sh@21 -- # val= 00:06:44.628 11:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.628 11:58:51 -- accel/accel.sh@20 -- # IFS=: 00:06:44.628 11:58:51 -- accel/accel.sh@20 -- # read -r var val 00:06:44.628 11:58:51 -- accel/accel.sh@21 -- # val= 00:06:44.628 11:58:51 -- accel/accel.sh@22 -- # case "$var" in 00:06:44.628 11:58:51 -- 
accel/accel.sh@20 -- # IFS=: 00:06:44.628 11:58:51 -- accel/accel.sh@20 -- # read -r var val 00:06:44.628 11:58:51 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:44.628 11:58:51 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:06:44.628 11:58:51 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.628 00:06:44.628 real 0m2.886s 00:06:44.628 user 0m2.560s 00:06:44.628 sys 0m0.307s 00:06:44.628 11:58:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.628 11:58:51 -- common/autotest_common.sh@10 -- # set +x 00:06:44.628 ************************************ 00:06:44.628 END TEST accel_dif_generate_copy 00:06:44.628 ************************************ 00:06:44.628 11:58:51 -- accel/accel.sh@107 -- # [[ y == y ]] 00:06:44.628 11:58:51 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:06:44.628 11:58:51 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:44.628 11:58:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.628 11:58:51 -- common/autotest_common.sh@10 -- # set +x 00:06:44.628 ************************************ 00:06:44.628 START TEST accel_comp 00:06:44.628 ************************************ 00:06:44.628 11:58:51 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:06:44.628 11:58:51 -- accel/accel.sh@16 -- # local accel_opc 00:06:44.628 11:58:51 -- accel/accel.sh@17 -- # local accel_module 00:06:44.628 11:58:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:06:44.628 11:58:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:06:44.628 11:58:51 -- accel/accel.sh@12 -- # build_accel_config 
00:06:44.628 11:58:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.628 11:58:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.628 11:58:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.628 11:58:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.628 11:58:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.628 11:58:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.628 11:58:51 -- accel/accel.sh@42 -- # jq -r . 00:06:44.628 [2024-07-25 11:58:51.577849] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:44.628 [2024-07-25 11:58:51.577911] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1191334 ] 00:06:44.628 [2024-07-25 11:58:51.662751] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.628 [2024-07-25 11:58:51.746040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.008 11:58:52 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:46.008 00:06:46.008 SPDK Configuration: 00:06:46.008 Core mask: 0x1 00:06:46.008 00:06:46.008 Accel Perf Configuration: 00:06:46.008 Workload Type: compress 00:06:46.008 Transfer size: 4096 bytes 00:06:46.008 Vector count 1 00:06:46.008 Module: software 00:06:46.008 File Name: /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:06:46.008 Queue depth: 32 00:06:46.008 Allocate depth: 32 00:06:46.008 # threads/core: 1 00:06:46.008 Run time: 1 seconds 00:06:46.008 Verify: No 00:06:46.008 00:06:46.008 Running for 1 seconds... 
00:06:46.008 00:06:46.008 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:46.008 ------------------------------------------------------------------------------------ 00:06:46.008 0,0 64800/s 270 MiB/s 0 0 00:06:46.008 ==================================================================================== 00:06:46.008 Total 64800/s 253 MiB/s 0 0' 00:06:46.008 11:58:52 -- accel/accel.sh@20 -- # IFS=: 00:06:46.008 11:58:52 -- accel/accel.sh@20 -- # read -r var val 00:06:46.008 11:58:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:06:46.008 11:58:52 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:06:46.008 11:58:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:46.008 11:58:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:46.008 11:58:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.008 11:58:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.008 11:58:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:46.008 11:58:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:46.008 11:58:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:46.008 11:58:52 -- accel/accel.sh@42 -- # jq -r . 00:06:46.008 [2024-07-25 11:58:53.007986] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:46.008 [2024-07-25 11:58:53.008037] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1191521 ] 00:06:46.008 [2024-07-25 11:58:53.094109] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.008 [2024-07-25 11:58:53.173750] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.008 11:58:53 -- accel/accel.sh@21 -- # val= 00:06:46.008 11:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:46.008 11:58:53 -- accel/accel.sh@21 -- # val= 00:06:46.008 11:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:46.008 11:58:53 -- accel/accel.sh@21 -- # val= 00:06:46.008 11:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:46.008 11:58:53 -- accel/accel.sh@21 -- # val=0x1 00:06:46.008 11:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:46.008 11:58:53 -- accel/accel.sh@21 -- # val= 00:06:46.008 11:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:46.008 11:58:53 -- accel/accel.sh@21 -- # val= 00:06:46.008 11:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:46.008 11:58:53 -- accel/accel.sh@21 -- # val=compress 00:06:46.008 11:58:53 -- accel/accel.sh@22 
-- # case "$var" in 00:06:46.008 11:58:53 -- accel/accel.sh@24 -- # accel_opc=compress 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:46.008 11:58:53 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:46.008 11:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:46.008 11:58:53 -- accel/accel.sh@21 -- # val= 00:06:46.008 11:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:46.008 11:58:53 -- accel/accel.sh@21 -- # val=software 00:06:46.008 11:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.008 11:58:53 -- accel/accel.sh@23 -- # accel_module=software 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:46.008 11:58:53 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:06:46.008 11:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:46.008 11:58:53 -- accel/accel.sh@21 -- # val=32 00:06:46.008 11:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:46.008 11:58:53 -- accel/accel.sh@21 -- # val=32 00:06:46.008 11:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:46.008 11:58:53 -- accel/accel.sh@21 -- # val=1 00:06:46.008 11:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # read -r var val 
00:06:46.008 11:58:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:46.008 11:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:46.008 11:58:53 -- accel/accel.sh@21 -- # val=No 00:06:46.008 11:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:46.008 11:58:53 -- accel/accel.sh@21 -- # val= 00:06:46.008 11:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:46.008 11:58:53 -- accel/accel.sh@21 -- # val= 00:06:46.008 11:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # IFS=: 00:06:46.008 11:58:53 -- accel/accel.sh@20 -- # read -r var val 00:06:47.391 11:58:54 -- accel/accel.sh@21 -- # val= 00:06:47.391 11:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.391 11:58:54 -- accel/accel.sh@20 -- # IFS=: 00:06:47.391 11:58:54 -- accel/accel.sh@20 -- # read -r var val 00:06:47.391 11:58:54 -- accel/accel.sh@21 -- # val= 00:06:47.391 11:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.391 11:58:54 -- accel/accel.sh@20 -- # IFS=: 00:06:47.391 11:58:54 -- accel/accel.sh@20 -- # read -r var val 00:06:47.391 11:58:54 -- accel/accel.sh@21 -- # val= 00:06:47.391 11:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.391 11:58:54 -- accel/accel.sh@20 -- # IFS=: 00:06:47.391 11:58:54 -- accel/accel.sh@20 -- # read -r var val 00:06:47.391 11:58:54 -- accel/accel.sh@21 -- # val= 00:06:47.391 11:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.391 11:58:54 -- accel/accel.sh@20 -- # IFS=: 00:06:47.391 11:58:54 -- accel/accel.sh@20 -- # read -r var val 00:06:47.391 11:58:54 -- accel/accel.sh@21 -- # val= 00:06:47.391 11:58:54 -- accel/accel.sh@22 -- # case "$var" 
in 00:06:47.391 11:58:54 -- accel/accel.sh@20 -- # IFS=: 00:06:47.391 11:58:54 -- accel/accel.sh@20 -- # read -r var val 00:06:47.391 11:58:54 -- accel/accel.sh@21 -- # val= 00:06:47.391 11:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.392 11:58:54 -- accel/accel.sh@20 -- # IFS=: 00:06:47.392 11:58:54 -- accel/accel.sh@20 -- # read -r var val 00:06:47.392 11:58:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:47.392 11:58:54 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:06:47.392 11:58:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.392 00:06:47.392 real 0m2.860s 00:06:47.392 user 0m2.534s 00:06:47.392 sys 0m0.300s 00:06:47.392 11:58:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.392 11:58:54 -- common/autotest_common.sh@10 -- # set +x 00:06:47.392 ************************************ 00:06:47.392 END TEST accel_comp 00:06:47.392 ************************************ 00:06:47.392 11:58:54 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:06:47.392 11:58:54 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:47.392 11:58:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:47.392 11:58:54 -- common/autotest_common.sh@10 -- # set +x 00:06:47.392 ************************************ 00:06:47.392 START TEST accel_decomp 00:06:47.392 ************************************ 00:06:47.392 11:58:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:06:47.392 11:58:54 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.392 11:58:54 -- accel/accel.sh@17 -- # local accel_module 00:06:47.392 11:58:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:06:47.392 11:58:54 -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:06:47.392 11:58:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.392 11:58:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.392 11:58:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.392 11:58:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.392 11:58:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.392 11:58:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.392 11:58:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.392 11:58:54 -- accel/accel.sh@42 -- # jq -r . 00:06:47.392 [2024-07-25 11:58:54.474324] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:47.392 [2024-07-25 11:58:54.474385] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1191721 ] 00:06:47.392 [2024-07-25 11:58:54.558332] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.392 [2024-07-25 11:58:54.640637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.769 11:58:55 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:48.769 00:06:48.769 SPDK Configuration: 00:06:48.769 Core mask: 0x1 00:06:48.769 00:06:48.769 Accel Perf Configuration: 00:06:48.769 Workload Type: decompress 00:06:48.769 Transfer size: 4096 bytes 00:06:48.769 Vector count 1 00:06:48.769 Module: software 00:06:48.769 File Name: /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:06:48.769 Queue depth: 32 00:06:48.769 Allocate depth: 32 00:06:48.769 # threads/core: 1 00:06:48.769 Run time: 1 seconds 00:06:48.769 Verify: Yes 00:06:48.769 00:06:48.769 Running for 1 seconds... 
00:06:48.769 00:06:48.769 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:48.769 ------------------------------------------------------------------------------------ 00:06:48.769 0,0 86944/s 160 MiB/s 0 0 00:06:48.769 ==================================================================================== 00:06:48.769 Total 86944/s 339 MiB/s 0 0' 00:06:48.769 11:58:55 -- accel/accel.sh@20 -- # IFS=: 00:06:48.769 11:58:55 -- accel/accel.sh@20 -- # read -r var val 00:06:48.769 11:58:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:06:48.769 11:58:55 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:06:48.769 11:58:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.769 11:58:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.769 11:58:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.769 11:58:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.769 11:58:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.769 11:58:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.769 11:58:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.769 11:58:55 -- accel/accel.sh@42 -- # jq -r . 00:06:48.769 [2024-07-25 11:58:55.894132] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:48.769 [2024-07-25 11:58:55.894182] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1191907 ] 00:06:48.769 [2024-07-25 11:58:55.979428] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.769 [2024-07-25 11:58:56.061291] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.029 11:58:56 -- accel/accel.sh@21 -- # val= 00:06:49.029 11:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:49.029 11:58:56 -- accel/accel.sh@21 -- # val= 00:06:49.029 11:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:49.029 11:58:56 -- accel/accel.sh@21 -- # val= 00:06:49.029 11:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:49.029 11:58:56 -- accel/accel.sh@21 -- # val=0x1 00:06:49.029 11:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:49.029 11:58:56 -- accel/accel.sh@21 -- # val= 00:06:49.029 11:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:49.029 11:58:56 -- accel/accel.sh@21 -- # val= 00:06:49.029 11:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:49.029 11:58:56 -- accel/accel.sh@21 -- # val=decompress 00:06:49.029 11:58:56 -- accel/accel.sh@22 
-- # case "$var" in 00:06:49.029 11:58:56 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:49.029 11:58:56 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:49.029 11:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:49.029 11:58:56 -- accel/accel.sh@21 -- # val= 00:06:49.029 11:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:49.029 11:58:56 -- accel/accel.sh@21 -- # val=software 00:06:49.029 11:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.029 11:58:56 -- accel/accel.sh@23 -- # accel_module=software 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:49.029 11:58:56 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:06:49.029 11:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:49.029 11:58:56 -- accel/accel.sh@21 -- # val=32 00:06:49.029 11:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:49.029 11:58:56 -- accel/accel.sh@21 -- # val=32 00:06:49.029 11:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:49.029 11:58:56 -- accel/accel.sh@21 -- # val=1 00:06:49.029 11:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # read -r var 
val 00:06:49.029 11:58:56 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:49.029 11:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:49.029 11:58:56 -- accel/accel.sh@21 -- # val=Yes 00:06:49.029 11:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:49.029 11:58:56 -- accel/accel.sh@21 -- # val= 00:06:49.029 11:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:49.029 11:58:56 -- accel/accel.sh@21 -- # val= 00:06:49.029 11:58:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # IFS=: 00:06:49.029 11:58:56 -- accel/accel.sh@20 -- # read -r var val 00:06:50.406 11:58:57 -- accel/accel.sh@21 -- # val= 00:06:50.407 11:58:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.407 11:58:57 -- accel/accel.sh@20 -- # IFS=: 00:06:50.407 11:58:57 -- accel/accel.sh@20 -- # read -r var val 00:06:50.407 11:58:57 -- accel/accel.sh@21 -- # val= 00:06:50.407 11:58:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.407 11:58:57 -- accel/accel.sh@20 -- # IFS=: 00:06:50.407 11:58:57 -- accel/accel.sh@20 -- # read -r var val 00:06:50.407 11:58:57 -- accel/accel.sh@21 -- # val= 00:06:50.407 11:58:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.407 11:58:57 -- accel/accel.sh@20 -- # IFS=: 00:06:50.407 11:58:57 -- accel/accel.sh@20 -- # read -r var val 00:06:50.407 11:58:57 -- accel/accel.sh@21 -- # val= 00:06:50.407 11:58:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.407 11:58:57 -- accel/accel.sh@20 -- # IFS=: 00:06:50.407 11:58:57 -- accel/accel.sh@20 -- # read -r var val 00:06:50.407 11:58:57 -- accel/accel.sh@21 -- # val= 00:06:50.407 11:58:57 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:50.407 11:58:57 -- accel/accel.sh@20 -- # IFS=: 00:06:50.407 11:58:57 -- accel/accel.sh@20 -- # read -r var val 00:06:50.407 11:58:57 -- accel/accel.sh@21 -- # val= 00:06:50.407 11:58:57 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.407 11:58:57 -- accel/accel.sh@20 -- # IFS=: 00:06:50.407 11:58:57 -- accel/accel.sh@20 -- # read -r var val 00:06:50.407 11:58:57 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:50.407 11:58:57 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:50.407 11:58:57 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:50.407 00:06:50.407 real 0m2.855s 00:06:50.407 user 0m2.531s 00:06:50.407 sys 0m0.310s 00:06:50.407 11:58:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:50.407 11:58:57 -- common/autotest_common.sh@10 -- # set +x 00:06:50.407 ************************************ 00:06:50.407 END TEST accel_decomp 00:06:50.407 ************************************ 00:06:50.407 11:58:57 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:50.407 11:58:57 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:50.407 11:58:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:50.407 11:58:57 -- common/autotest_common.sh@10 -- # set +x 00:06:50.407 ************************************ 00:06:50.407 START TEST accel_decmop_full 00:06:50.407 ************************************ 00:06:50.407 11:58:57 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:50.407 11:58:57 -- accel/accel.sh@16 -- # local accel_opc 00:06:50.407 11:58:57 -- accel/accel.sh@17 -- # local accel_module 00:06:50.407 11:58:57 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:50.407 11:58:57 -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:50.407 11:58:57 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.407 11:58:57 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.407 11:58:57 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.407 11:58:57 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.407 11:58:57 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.407 11:58:57 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.407 11:58:57 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.407 11:58:57 -- accel/accel.sh@42 -- # jq -r . 00:06:50.407 [2024-07-25 11:58:57.354433] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:50.407 [2024-07-25 11:58:57.354481] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1192100 ] 00:06:50.407 [2024-07-25 11:58:57.440611] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.407 [2024-07-25 11:58:57.524212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.786 11:58:58 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:51.786 00:06:51.786 SPDK Configuration: 00:06:51.786 Core mask: 0x1 00:06:51.786 00:06:51.786 Accel Perf Configuration: 00:06:51.786 Workload Type: decompress 00:06:51.786 Transfer size: 111250 bytes 00:06:51.786 Vector count 1 00:06:51.786 Module: software 00:06:51.786 File Name: /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:06:51.786 Queue depth: 32 00:06:51.786 Allocate depth: 32 00:06:51.786 # threads/core: 1 00:06:51.786 Run time: 1 seconds 00:06:51.786 Verify: Yes 00:06:51.786 00:06:51.786 Running for 1 seconds... 
00:06:51.786 00:06:51.786 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:51.786 ------------------------------------------------------------------------------------ 00:06:51.786 0,0 5760/s 237 MiB/s 0 0 00:06:51.786 ==================================================================================== 00:06:51.786 Total 5760/s 611 MiB/s 0 0' 00:06:51.786 11:58:58 -- accel/accel.sh@20 -- # IFS=: 00:06:51.786 11:58:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:51.786 11:58:58 -- accel/accel.sh@20 -- # read -r var val 00:06:51.786 11:58:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:51.786 11:58:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.786 11:58:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.786 11:58:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.786 11:58:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.786 11:58:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.786 11:58:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.786 11:58:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.786 11:58:58 -- accel/accel.sh@42 -- # jq -r . 00:06:51.786 [2024-07-25 11:58:58.805352] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:51.786 [2024-07-25 11:58:58.805411] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1192283 ] 00:06:51.786 [2024-07-25 11:58:58.892842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.786 [2024-07-25 11:58:58.983362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.786 11:58:59 -- accel/accel.sh@21 -- # val= 00:06:51.786 11:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.786 11:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.786 11:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:51.786 11:58:59 -- accel/accel.sh@21 -- # val= 00:06:51.786 11:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.786 11:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.786 11:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:51.786 11:58:59 -- accel/accel.sh@21 -- # val= 00:06:51.786 11:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.786 11:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.786 11:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:51.786 11:58:59 -- accel/accel.sh@21 -- # val=0x1 00:06:51.786 11:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.786 11:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.786 11:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:51.786 11:58:59 -- accel/accel.sh@21 -- # val= 00:06:51.786 11:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.786 11:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.786 11:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:51.786 11:58:59 -- accel/accel.sh@21 -- # val= 00:06:51.786 11:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.786 11:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.786 11:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:51.786 11:58:59 -- accel/accel.sh@21 -- # val=decompress 00:06:51.786 11:58:59 -- accel/accel.sh@22 
-- # case "$var" in 00:06:51.786 11:58:59 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:51.786 11:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.786 11:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:51.786 11:58:59 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:51.786 11:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.786 11:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.786 11:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:51.786 11:58:59 -- accel/accel.sh@21 -- # val= 00:06:51.786 11:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.786 11:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.786 11:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:51.786 11:58:59 -- accel/accel.sh@21 -- # val=software 00:06:51.786 11:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.786 11:58:59 -- accel/accel.sh@23 -- # accel_module=software 00:06:51.786 11:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.786 11:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:51.786 11:58:59 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:06:51.786 11:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.786 11:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.786 11:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:51.786 11:58:59 -- accel/accel.sh@21 -- # val=32 00:06:51.786 11:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.786 11:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.786 11:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:51.786 11:58:59 -- accel/accel.sh@21 -- # val=32 00:06:51.786 11:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.786 11:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.786 11:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:51.786 11:58:59 -- accel/accel.sh@21 -- # val=1 00:06:51.786 11:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.786 11:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.786 11:58:59 -- accel/accel.sh@20 -- # read -r var 
val 00:06:51.786 11:58:59 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:51.786 11:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.786 11:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.786 11:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:51.786 11:58:59 -- accel/accel.sh@21 -- # val=Yes 00:06:51.786 11:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.786 11:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.786 11:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:51.786 11:58:59 -- accel/accel.sh@21 -- # val= 00:06:51.786 11:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.786 11:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.786 11:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:51.786 11:58:59 -- accel/accel.sh@21 -- # val= 00:06:51.786 11:58:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.787 11:58:59 -- accel/accel.sh@20 -- # IFS=: 00:06:51.787 11:58:59 -- accel/accel.sh@20 -- # read -r var val 00:06:53.162 11:59:00 -- accel/accel.sh@21 -- # val= 00:06:53.162 11:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.162 11:59:00 -- accel/accel.sh@20 -- # IFS=: 00:06:53.162 11:59:00 -- accel/accel.sh@20 -- # read -r var val 00:06:53.162 11:59:00 -- accel/accel.sh@21 -- # val= 00:06:53.162 11:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.162 11:59:00 -- accel/accel.sh@20 -- # IFS=: 00:06:53.162 11:59:00 -- accel/accel.sh@20 -- # read -r var val 00:06:53.162 11:59:00 -- accel/accel.sh@21 -- # val= 00:06:53.162 11:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.162 11:59:00 -- accel/accel.sh@20 -- # IFS=: 00:06:53.162 11:59:00 -- accel/accel.sh@20 -- # read -r var val 00:06:53.162 11:59:00 -- accel/accel.sh@21 -- # val= 00:06:53.162 11:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.162 11:59:00 -- accel/accel.sh@20 -- # IFS=: 00:06:53.162 11:59:00 -- accel/accel.sh@20 -- # read -r var val 00:06:53.162 11:59:00 -- accel/accel.sh@21 -- # val= 00:06:53.162 11:59:00 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:53.162 11:59:00 -- accel/accel.sh@20 -- # IFS=: 00:06:53.162 11:59:00 -- accel/accel.sh@20 -- # read -r var val 00:06:53.162 11:59:00 -- accel/accel.sh@21 -- # val= 00:06:53.162 11:59:00 -- accel/accel.sh@22 -- # case "$var" in 00:06:53.162 11:59:00 -- accel/accel.sh@20 -- # IFS=: 00:06:53.162 11:59:00 -- accel/accel.sh@20 -- # read -r var val 00:06:53.162 11:59:00 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:53.162 11:59:00 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:53.162 11:59:00 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:53.162 00:06:53.162 real 0m2.895s 00:06:53.162 user 0m2.573s 00:06:53.162 sys 0m0.295s 00:06:53.162 11:59:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.162 11:59:00 -- common/autotest_common.sh@10 -- # set +x 00:06:53.162 ************************************ 00:06:53.162 END TEST accel_decmop_full 00:06:53.162 ************************************ 00:06:53.162 11:59:00 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:53.162 11:59:00 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:53.162 11:59:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:53.162 11:59:00 -- common/autotest_common.sh@10 -- # set +x 00:06:53.162 ************************************ 00:06:53.162 START TEST accel_decomp_mcore 00:06:53.162 ************************************ 00:06:53.162 11:59:00 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:53.162 11:59:00 -- accel/accel.sh@16 -- # local accel_opc 00:06:53.162 11:59:00 -- accel/accel.sh@17 -- # local accel_module 00:06:53.162 11:59:00 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:53.162 11:59:00 -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:53.162 11:59:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:53.162 11:59:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:53.162 11:59:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:53.162 11:59:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:53.162 11:59:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:53.162 11:59:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:53.162 11:59:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:53.162 11:59:00 -- accel/accel.sh@42 -- # jq -r . 00:06:53.162 [2024-07-25 11:59:00.305719] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:53.163 [2024-07-25 11:59:00.305772] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1192490 ] 00:06:53.163 [2024-07-25 11:59:00.393642] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:53.422 [2024-07-25 11:59:00.478481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:53.422 [2024-07-25 11:59:00.478567] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.422 [2024-07-25 11:59:00.478648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:53.422 [2024-07-25 11:59:00.478651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.798 11:59:01 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:06:54.798 00:06:54.798 SPDK Configuration: 00:06:54.798 Core mask: 0xf 00:06:54.798 00:06:54.798 Accel Perf Configuration: 00:06:54.798 Workload Type: decompress 00:06:54.798 Transfer size: 4096 bytes 00:06:54.798 Vector count 1 00:06:54.798 Module: software 00:06:54.798 File Name: /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:06:54.798 Queue depth: 32 00:06:54.798 Allocate depth: 32 00:06:54.798 # threads/core: 1 00:06:54.798 Run time: 1 seconds 00:06:54.798 Verify: Yes 00:06:54.798 00:06:54.798 Running for 1 seconds... 00:06:54.798 00:06:54.798 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:54.798 ------------------------------------------------------------------------------------ 00:06:54.799 0,0 70144/s 129 MiB/s 0 0 00:06:54.799 3,0 72576/s 133 MiB/s 0 0 00:06:54.799 2,0 72608/s 133 MiB/s 0 0 00:06:54.799 1,0 72576/s 133 MiB/s 0 0 00:06:54.799 ==================================================================================== 00:06:54.799 Total 287904/s 1124 MiB/s 0 0' 00:06:54.799 11:59:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.799 11:59:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:54.799 11:59:01 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.799 11:59:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.799 11:59:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.799 11:59:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.799 11:59:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.799 11:59:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.799 11:59:01 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.799 11:59:01 -- 
accel/accel.sh@42 -- # jq -r . 00:06:54.799 [2024-07-25 11:59:01.732585] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:54.799 [2024-07-25 11:59:01.732637] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1192731 ] 00:06:54.799 [2024-07-25 11:59:01.816550] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:54.799 [2024-07-25 11:59:01.901957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.799 [2024-07-25 11:59:01.902047] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:54.799 [2024-07-25 11:59:01.902129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:54.799 [2024-07-25 11:59:01.902131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.799 11:59:01 -- accel/accel.sh@21 -- # val= 00:06:54.799 11:59:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.799 11:59:01 -- accel/accel.sh@21 -- # val= 00:06:54.799 11:59:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.799 11:59:01 -- accel/accel.sh@21 -- # val= 00:06:54.799 11:59:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.799 11:59:01 -- accel/accel.sh@21 -- # val=0xf 00:06:54.799 11:59:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.799 11:59:01 -- accel/accel.sh@21 -- # val= 00:06:54.799 11:59:01 -- accel/accel.sh@22 -- # 
case "$var" in 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.799 11:59:01 -- accel/accel.sh@21 -- # val= 00:06:54.799 11:59:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.799 11:59:01 -- accel/accel.sh@21 -- # val=decompress 00:06:54.799 11:59:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.799 11:59:01 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.799 11:59:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:54.799 11:59:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.799 11:59:01 -- accel/accel.sh@21 -- # val= 00:06:54.799 11:59:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.799 11:59:01 -- accel/accel.sh@21 -- # val=software 00:06:54.799 11:59:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.799 11:59:01 -- accel/accel.sh@23 -- # accel_module=software 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.799 11:59:01 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:06:54.799 11:59:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.799 11:59:01 -- accel/accel.sh@21 -- # val=32 00:06:54.799 11:59:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # read -r var 
val 00:06:54.799 11:59:01 -- accel/accel.sh@21 -- # val=32 00:06:54.799 11:59:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.799 11:59:01 -- accel/accel.sh@21 -- # val=1 00:06:54.799 11:59:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.799 11:59:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:54.799 11:59:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.799 11:59:01 -- accel/accel.sh@21 -- # val=Yes 00:06:54.799 11:59:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.799 11:59:01 -- accel/accel.sh@21 -- # val= 00:06:54.799 11:59:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # read -r var val 00:06:54.799 11:59:01 -- accel/accel.sh@21 -- # val= 00:06:54.799 11:59:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # IFS=: 00:06:54.799 11:59:01 -- accel/accel.sh@20 -- # read -r var val 00:06:56.205 11:59:03 -- accel/accel.sh@21 -- # val= 00:06:56.205 11:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.205 11:59:03 -- accel/accel.sh@20 -- # IFS=: 00:06:56.205 11:59:03 -- accel/accel.sh@20 -- # read -r var val 00:06:56.205 11:59:03 -- accel/accel.sh@21 -- # val= 00:06:56.205 11:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.205 11:59:03 -- accel/accel.sh@20 -- # IFS=: 00:06:56.205 11:59:03 -- accel/accel.sh@20 -- # read -r var val 00:06:56.205 11:59:03 -- accel/accel.sh@21 -- # val= 00:06:56.205 11:59:03 -- accel/accel.sh@22 -- # case 
"$var" in 00:06:56.205 11:59:03 -- accel/accel.sh@20 -- # IFS=: 00:06:56.205 11:59:03 -- accel/accel.sh@20 -- # read -r var val 00:06:56.205 11:59:03 -- accel/accel.sh@21 -- # val= 00:06:56.205 11:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.205 11:59:03 -- accel/accel.sh@20 -- # IFS=: 00:06:56.205 11:59:03 -- accel/accel.sh@20 -- # read -r var val 00:06:56.205 11:59:03 -- accel/accel.sh@21 -- # val= 00:06:56.205 11:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.205 11:59:03 -- accel/accel.sh@20 -- # IFS=: 00:06:56.205 11:59:03 -- accel/accel.sh@20 -- # read -r var val 00:06:56.205 11:59:03 -- accel/accel.sh@21 -- # val= 00:06:56.205 11:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.205 11:59:03 -- accel/accel.sh@20 -- # IFS=: 00:06:56.205 11:59:03 -- accel/accel.sh@20 -- # read -r var val 00:06:56.205 11:59:03 -- accel/accel.sh@21 -- # val= 00:06:56.205 11:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.205 11:59:03 -- accel/accel.sh@20 -- # IFS=: 00:06:56.205 11:59:03 -- accel/accel.sh@20 -- # read -r var val 00:06:56.205 11:59:03 -- accel/accel.sh@21 -- # val= 00:06:56.205 11:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.205 11:59:03 -- accel/accel.sh@20 -- # IFS=: 00:06:56.205 11:59:03 -- accel/accel.sh@20 -- # read -r var val 00:06:56.205 11:59:03 -- accel/accel.sh@21 -- # val= 00:06:56.205 11:59:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.205 11:59:03 -- accel/accel.sh@20 -- # IFS=: 00:06:56.205 11:59:03 -- accel/accel.sh@20 -- # read -r var val 00:06:56.205 11:59:03 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:56.205 11:59:03 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:56.205 11:59:03 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.205 00:06:56.205 real 0m2.869s 00:06:56.205 user 0m9.305s 00:06:56.205 sys 0m0.317s 00:06:56.205 11:59:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.205 11:59:03 -- common/autotest_common.sh@10 -- # set +x 00:06:56.205 
************************************ 00:06:56.205 END TEST accel_decomp_mcore 00:06:56.205 ************************************ 00:06:56.205 11:59:03 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:56.205 11:59:03 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:56.205 11:59:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:56.205 11:59:03 -- common/autotest_common.sh@10 -- # set +x 00:06:56.205 ************************************ 00:06:56.205 START TEST accel_decomp_full_mcore 00:06:56.205 ************************************ 00:06:56.205 11:59:03 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:56.205 11:59:03 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.205 11:59:03 -- accel/accel.sh@17 -- # local accel_module 00:06:56.205 11:59:03 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:56.205 11:59:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.205 11:59:03 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:56.205 11:59:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.205 11:59:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.205 11:59:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.205 11:59:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.205 11:59:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.205 11:59:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.205 11:59:03 -- accel/accel.sh@42 -- # jq -r . 
00:06:56.205 [2024-07-25 11:59:03.219428] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:06:56.205 [2024-07-25 11:59:03.219492] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1192993 ] 00:06:56.205 [2024-07-25 11:59:03.304222] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:56.205 [2024-07-25 11:59:03.391710] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.205 [2024-07-25 11:59:03.391797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:56.205 [2024-07-25 11:59:03.391875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:56.205 [2024-07-25 11:59:03.391877] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.582 11:59:04 -- accel/accel.sh@18 -- # out='Preparing input file... 00:06:57.582 00:06:57.582 SPDK Configuration: 00:06:57.582 Core mask: 0xf 00:06:57.582 00:06:57.582 Accel Perf Configuration: 00:06:57.582 Workload Type: decompress 00:06:57.582 Transfer size: 111250 bytes 00:06:57.582 Vector count 1 00:06:57.582 Module: software 00:06:57.582 File Name: /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:06:57.582 Queue depth: 32 00:06:57.582 Allocate depth: 32 00:06:57.582 # threads/core: 1 00:06:57.582 Run time: 1 seconds 00:06:57.582 Verify: Yes 00:06:57.582 00:06:57.582 Running for 1 seconds... 
00:06:57.582 00:06:57.582 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:57.582 ------------------------------------------------------------------------------------ 00:06:57.582 0,0 5376/s 222 MiB/s 0 0 00:06:57.582 3,0 5568/s 230 MiB/s 0 0 00:06:57.582 2,0 5536/s 228 MiB/s 0 0 00:06:57.582 1,0 5536/s 228 MiB/s 0 0 00:06:57.582 ==================================================================================== 00:06:57.582 Total 22016/s 2335 MiB/s 0 0' 00:06:57.582 11:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:57.582 11:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:57.582 11:59:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:57.582 11:59:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:57.582 11:59:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:57.582 11:59:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:57.582 11:59:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.582 11:59:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.582 11:59:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:57.582 11:59:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:57.582 11:59:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:57.582 11:59:04 -- accel/accel.sh@42 -- # jq -r . 00:06:57.582 [2024-07-25 11:59:04.689605] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:57.582 [2024-07-25 11:59:04.689669] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1193221 ] 00:06:57.582 [2024-07-25 11:59:04.774573] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:57.582 [2024-07-25 11:59:04.859616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.582 [2024-07-25 11:59:04.859705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:57.582 [2024-07-25 11:59:04.859781] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:57.582 [2024-07-25 11:59:04.859783] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.842 11:59:04 -- accel/accel.sh@21 -- # val= 00:06:57.842 11:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:57.842 11:59:04 -- accel/accel.sh@21 -- # val= 00:06:57.842 11:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:57.842 11:59:04 -- accel/accel.sh@21 -- # val= 00:06:57.842 11:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:57.842 11:59:04 -- accel/accel.sh@21 -- # val=0xf 00:06:57.842 11:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:57.842 11:59:04 -- accel/accel.sh@21 -- # val= 00:06:57.842 11:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:57.842 11:59:04 
-- accel/accel.sh@21 -- # val= 00:06:57.842 11:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:57.842 11:59:04 -- accel/accel.sh@21 -- # val=decompress 00:06:57.842 11:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.842 11:59:04 -- accel/accel.sh@24 -- # accel_opc=decompress 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:57.842 11:59:04 -- accel/accel.sh@21 -- # val='111250 bytes' 00:06:57.842 11:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:57.842 11:59:04 -- accel/accel.sh@21 -- # val= 00:06:57.842 11:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:57.842 11:59:04 -- accel/accel.sh@21 -- # val=software 00:06:57.842 11:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.842 11:59:04 -- accel/accel.sh@23 -- # accel_module=software 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:57.842 11:59:04 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:06:57.842 11:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:57.842 11:59:04 -- accel/accel.sh@21 -- # val=32 00:06:57.842 11:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:57.842 11:59:04 -- accel/accel.sh@21 -- # val=32 00:06:57.842 11:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.842 11:59:04 -- 
accel/accel.sh@20 -- # IFS=: 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:57.842 11:59:04 -- accel/accel.sh@21 -- # val=1 00:06:57.842 11:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:57.842 11:59:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:57.842 11:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:57.842 11:59:04 -- accel/accel.sh@21 -- # val=Yes 00:06:57.842 11:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:57.842 11:59:04 -- accel/accel.sh@21 -- # val= 00:06:57.842 11:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:57.842 11:59:04 -- accel/accel.sh@21 -- # val= 00:06:57.842 11:59:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # IFS=: 00:06:57.842 11:59:04 -- accel/accel.sh@20 -- # read -r var val 00:06:59.220 11:59:06 -- accel/accel.sh@21 -- # val= 00:06:59.220 11:59:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.220 11:59:06 -- accel/accel.sh@20 -- # IFS=: 00:06:59.220 11:59:06 -- accel/accel.sh@20 -- # read -r var val 00:06:59.220 11:59:06 -- accel/accel.sh@21 -- # val= 00:06:59.220 11:59:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.220 11:59:06 -- accel/accel.sh@20 -- # IFS=: 00:06:59.220 11:59:06 -- accel/accel.sh@20 -- # read -r var val 00:06:59.220 11:59:06 -- accel/accel.sh@21 -- # val= 00:06:59.220 11:59:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.220 11:59:06 -- accel/accel.sh@20 -- # IFS=: 00:06:59.220 11:59:06 -- accel/accel.sh@20 -- # read -r var val 00:06:59.220 
11:59:06 -- accel/accel.sh@21 -- # val= 00:06:59.220 11:59:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.220 11:59:06 -- accel/accel.sh@20 -- # IFS=: 00:06:59.220 11:59:06 -- accel/accel.sh@20 -- # read -r var val 00:06:59.220 11:59:06 -- accel/accel.sh@21 -- # val= 00:06:59.220 11:59:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.220 11:59:06 -- accel/accel.sh@20 -- # IFS=: 00:06:59.220 11:59:06 -- accel/accel.sh@20 -- # read -r var val 00:06:59.220 11:59:06 -- accel/accel.sh@21 -- # val= 00:06:59.220 11:59:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.220 11:59:06 -- accel/accel.sh@20 -- # IFS=: 00:06:59.220 11:59:06 -- accel/accel.sh@20 -- # read -r var val 00:06:59.220 11:59:06 -- accel/accel.sh@21 -- # val= 00:06:59.220 11:59:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.220 11:59:06 -- accel/accel.sh@20 -- # IFS=: 00:06:59.220 11:59:06 -- accel/accel.sh@20 -- # read -r var val 00:06:59.220 11:59:06 -- accel/accel.sh@21 -- # val= 00:06:59.220 11:59:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.220 11:59:06 -- accel/accel.sh@20 -- # IFS=: 00:06:59.220 11:59:06 -- accel/accel.sh@20 -- # read -r var val 00:06:59.220 11:59:06 -- accel/accel.sh@21 -- # val= 00:06:59.220 11:59:06 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.220 11:59:06 -- accel/accel.sh@20 -- # IFS=: 00:06:59.220 11:59:06 -- accel/accel.sh@20 -- # read -r var val 00:06:59.220 11:59:06 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:59.221 11:59:06 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:06:59.221 11:59:06 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.221 00:06:59.221 real 0m2.940s 00:06:59.221 user 0m9.501s 00:06:59.221 sys 0m0.331s 00:06:59.221 11:59:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.221 11:59:06 -- common/autotest_common.sh@10 -- # set +x 00:06:59.221 ************************************ 00:06:59.221 END TEST accel_decomp_full_mcore 00:06:59.221 ************************************ 
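[editor's note, not part of the captured log] The accel_perf result tables below each end with a `Total` row; as a sanity check, the aggregate bandwidth figure is consistent with `transfers_per_sec * transfer_size / MiB`, truncated to an integer. This is an illustrative cross-check of the logged numbers, not SPDK code:

```python
# Cross-check the "Total" rows of the accel_perf tables captured below.
# Aggregate bandwidth (MiB/s) = total transfers/s * transfer size in bytes / 2^20,
# truncated to an integer as accel_perf prints it.
MIB = 1024 * 1024

def total_bandwidth_mib(transfers_per_sec: int, transfer_size_bytes: int) -> int:
    """Aggregate bandwidth in MiB/s, truncated."""
    return transfers_per_sec * transfer_size_bytes // MIB

# accel_decomp_mthread: Total 88544/s at 4096-byte transfers -> 345 MiB/s
print(total_bandwidth_mib(88544, 4096))    # prints 345
# accel_deomp_full_mthread: Total 5824/s at 111250-byte transfers -> 617 MiB/s
print(total_bandwidth_mib(5824, 111250))   # prints 617
```

(The per-thread MiB/s columns in the log use a different accounting window, so only the `Total` rows are checked here.)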
00:06:59.221 11:59:06 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:59.221 11:59:06 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:06:59.221 11:59:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:59.221 11:59:06 -- common/autotest_common.sh@10 -- # set +x 00:06:59.221 ************************************ 00:06:59.221 START TEST accel_decomp_mthread 00:06:59.221 ************************************ 00:06:59.221 11:59:06 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:59.221 11:59:06 -- accel/accel.sh@16 -- # local accel_opc 00:06:59.221 11:59:06 -- accel/accel.sh@17 -- # local accel_module 00:06:59.221 11:59:06 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:59.221 11:59:06 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.221 11:59:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:59.221 11:59:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.221 11:59:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.221 11:59:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.221 11:59:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.221 11:59:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.221 11:59:06 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.221 11:59:06 -- accel/accel.sh@42 -- # jq -r . 00:06:59.221 [2024-07-25 11:59:06.203542] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:06:59.221 [2024-07-25 11:59:06.203607] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1193423 ] 00:06:59.221 [2024-07-25 11:59:06.289028] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.221 [2024-07-25 11:59:06.371588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.600 11:59:07 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:00.600 00:07:00.600 SPDK Configuration: 00:07:00.600 Core mask: 0x1 00:07:00.600 00:07:00.600 Accel Perf Configuration: 00:07:00.600 Workload Type: decompress 00:07:00.600 Transfer size: 4096 bytes 00:07:00.600 Vector count 1 00:07:00.600 Module: software 00:07:00.600 File Name: /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:00.600 Queue depth: 32 00:07:00.600 Allocate depth: 32 00:07:00.600 # threads/core: 2 00:07:00.600 Run time: 1 seconds 00:07:00.600 Verify: Yes 00:07:00.600 00:07:00.600 Running for 1 seconds... 
00:07:00.600 00:07:00.600 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:00.600 ------------------------------------------------------------------------------------ 00:07:00.600 0,1 44352/s 81 MiB/s 0 0 00:07:00.600 0,0 44192/s 81 MiB/s 0 0 00:07:00.600 ==================================================================================== 00:07:00.600 Total 88544/s 345 MiB/s 0 0' 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.600 11:59:07 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:00.600 11:59:07 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:00.600 11:59:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.600 11:59:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.600 11:59:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.600 11:59:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.600 11:59:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.600 11:59:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.600 11:59:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.600 11:59:07 -- accel/accel.sh@42 -- # jq -r . 00:07:00.600 [2024-07-25 11:59:07.643993] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:00.600 [2024-07-25 11:59:07.644053] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1193606 ] 00:07:00.600 [2024-07-25 11:59:07.729737] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.600 [2024-07-25 11:59:07.809804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.600 11:59:07 -- accel/accel.sh@21 -- # val= 00:07:00.600 11:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.600 11:59:07 -- accel/accel.sh@21 -- # val= 00:07:00.600 11:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.600 11:59:07 -- accel/accel.sh@21 -- # val= 00:07:00.600 11:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.600 11:59:07 -- accel/accel.sh@21 -- # val=0x1 00:07:00.600 11:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.600 11:59:07 -- accel/accel.sh@21 -- # val= 00:07:00.600 11:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.600 11:59:07 -- accel/accel.sh@21 -- # val= 00:07:00.600 11:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.600 11:59:07 -- accel/accel.sh@21 -- # val=decompress 00:07:00.600 11:59:07 -- accel/accel.sh@22 
-- # case "$var" in 00:07:00.600 11:59:07 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.600 11:59:07 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:00.600 11:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.600 11:59:07 -- accel/accel.sh@21 -- # val= 00:07:00.600 11:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.600 11:59:07 -- accel/accel.sh@21 -- # val=software 00:07:00.600 11:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.600 11:59:07 -- accel/accel.sh@23 -- # accel_module=software 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.600 11:59:07 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:00.600 11:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.600 11:59:07 -- accel/accel.sh@21 -- # val=32 00:07:00.600 11:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.600 11:59:07 -- accel/accel.sh@21 -- # val=32 00:07:00.600 11:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.600 11:59:07 -- accel/accel.sh@21 -- # val=2 00:07:00.600 11:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # read -r var 
val 00:07:00.600 11:59:07 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:00.600 11:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.600 11:59:07 -- accel/accel.sh@21 -- # val=Yes 00:07:00.600 11:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.600 11:59:07 -- accel/accel.sh@21 -- # val= 00:07:00.600 11:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # read -r var val 00:07:00.600 11:59:07 -- accel/accel.sh@21 -- # val= 00:07:00.600 11:59:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # IFS=: 00:07:00.600 11:59:07 -- accel/accel.sh@20 -- # read -r var val 00:07:01.980 11:59:09 -- accel/accel.sh@21 -- # val= 00:07:01.980 11:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.980 11:59:09 -- accel/accel.sh@20 -- # IFS=: 00:07:01.980 11:59:09 -- accel/accel.sh@20 -- # read -r var val 00:07:01.980 11:59:09 -- accel/accel.sh@21 -- # val= 00:07:01.980 11:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.980 11:59:09 -- accel/accel.sh@20 -- # IFS=: 00:07:01.980 11:59:09 -- accel/accel.sh@20 -- # read -r var val 00:07:01.981 11:59:09 -- accel/accel.sh@21 -- # val= 00:07:01.981 11:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.981 11:59:09 -- accel/accel.sh@20 -- # IFS=: 00:07:01.981 11:59:09 -- accel/accel.sh@20 -- # read -r var val 00:07:01.981 11:59:09 -- accel/accel.sh@21 -- # val= 00:07:01.981 11:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.981 11:59:09 -- accel/accel.sh@20 -- # IFS=: 00:07:01.981 11:59:09 -- accel/accel.sh@20 -- # read -r var val 00:07:01.981 11:59:09 -- accel/accel.sh@21 -- # val= 00:07:01.981 11:59:09 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:01.981 11:59:09 -- accel/accel.sh@20 -- # IFS=: 00:07:01.981 11:59:09 -- accel/accel.sh@20 -- # read -r var val 00:07:01.981 11:59:09 -- accel/accel.sh@21 -- # val= 00:07:01.981 11:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.981 11:59:09 -- accel/accel.sh@20 -- # IFS=: 00:07:01.981 11:59:09 -- accel/accel.sh@20 -- # read -r var val 00:07:01.981 11:59:09 -- accel/accel.sh@21 -- # val= 00:07:01.981 11:59:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:01.981 11:59:09 -- accel/accel.sh@20 -- # IFS=: 00:07:01.981 11:59:09 -- accel/accel.sh@20 -- # read -r var val 00:07:01.981 11:59:09 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:01.981 11:59:09 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:01.981 11:59:09 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.981 00:07:01.981 real 0m2.876s 00:07:01.981 user 0m2.565s 00:07:01.981 sys 0m0.303s 00:07:01.981 11:59:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.981 11:59:09 -- common/autotest_common.sh@10 -- # set +x 00:07:01.981 ************************************ 00:07:01.981 END TEST accel_decomp_mthread 00:07:01.981 ************************************ 00:07:01.981 11:59:09 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:01.981 11:59:09 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:01.981 11:59:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:01.981 11:59:09 -- common/autotest_common.sh@10 -- # set +x 00:07:01.981 ************************************ 00:07:01.981 START TEST accel_deomp_full_mthread 00:07:01.981 ************************************ 00:07:01.981 11:59:09 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:01.981 11:59:09 -- accel/accel.sh@16 -- # local accel_opc 00:07:01.981 
11:59:09 -- accel/accel.sh@17 -- # local accel_module 00:07:01.981 11:59:09 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:01.981 11:59:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:01.981 11:59:09 -- accel/accel.sh@12 -- # build_accel_config 00:07:01.981 11:59:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:01.981 11:59:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.981 11:59:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.981 11:59:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:01.981 11:59:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:01.981 11:59:09 -- accel/accel.sh@41 -- # local IFS=, 00:07:01.981 11:59:09 -- accel/accel.sh@42 -- # jq -r . 00:07:01.981 [2024-07-25 11:59:09.114630] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:01.981 [2024-07-25 11:59:09.114677] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1193809 ] 00:07:01.981 [2024-07-25 11:59:09.198793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.981 [2024-07-25 11:59:09.281398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.359 11:59:10 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:03.359 00:07:03.359 SPDK Configuration: 00:07:03.359 Core mask: 0x1 00:07:03.359 00:07:03.359 Accel Perf Configuration: 00:07:03.359 Workload Type: decompress 00:07:03.359 Transfer size: 111250 bytes 00:07:03.359 Vector count 1 00:07:03.359 Module: software 00:07:03.359 File Name: /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:03.359 Queue depth: 32 00:07:03.360 Allocate depth: 32 00:07:03.360 # threads/core: 2 00:07:03.360 Run time: 1 seconds 00:07:03.360 Verify: Yes 00:07:03.360 00:07:03.360 Running for 1 seconds... 00:07:03.360 00:07:03.360 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:03.360 ------------------------------------------------------------------------------------ 00:07:03.360 0,1 2912/s 120 MiB/s 0 0 00:07:03.360 0,0 2912/s 120 MiB/s 0 0 00:07:03.360 ==================================================================================== 00:07:03.360 Total 5824/s 617 MiB/s 0 0' 00:07:03.360 11:59:10 -- accel/accel.sh@20 -- # IFS=: 00:07:03.360 11:59:10 -- accel/accel.sh@20 -- # read -r var val 00:07:03.360 11:59:10 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:03.360 11:59:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:03.360 11:59:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.360 11:59:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.360 11:59:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.360 11:59:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.360 11:59:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.360 11:59:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.360 11:59:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.360 11:59:10 -- accel/accel.sh@42 -- # jq -r . 
00:07:03.360 [2024-07-25 11:59:10.569683] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:03.360 [2024-07-25 11:59:10.569744] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1193990 ] 00:07:03.360 [2024-07-25 11:59:10.655543] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.618 [2024-07-25 11:59:10.739532] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.618 11:59:10 -- accel/accel.sh@21 -- # val= 00:07:03.618 11:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.618 11:59:10 -- accel/accel.sh@20 -- # IFS=: 00:07:03.618 11:59:10 -- accel/accel.sh@20 -- # read -r var val 00:07:03.618 11:59:10 -- accel/accel.sh@21 -- # val= 00:07:03.618 11:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.618 11:59:10 -- accel/accel.sh@20 -- # IFS=: 00:07:03.618 11:59:10 -- accel/accel.sh@20 -- # read -r var val 00:07:03.618 11:59:10 -- accel/accel.sh@21 -- # val= 00:07:03.618 11:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.618 11:59:10 -- accel/accel.sh@20 -- # IFS=: 00:07:03.619 11:59:10 -- accel/accel.sh@20 -- # read -r var val 00:07:03.619 11:59:10 -- accel/accel.sh@21 -- # val=0x1 00:07:03.619 11:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.619 11:59:10 -- accel/accel.sh@20 -- # IFS=: 00:07:03.619 11:59:10 -- accel/accel.sh@20 -- # read -r var val 00:07:03.619 11:59:10 -- accel/accel.sh@21 -- # val= 00:07:03.619 11:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.619 11:59:10 -- accel/accel.sh@20 -- # IFS=: 00:07:03.619 11:59:10 -- accel/accel.sh@20 -- # read -r var val 00:07:03.619 11:59:10 -- accel/accel.sh@21 -- # val= 00:07:03.619 11:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.619 11:59:10 -- accel/accel.sh@20 -- # IFS=: 00:07:03.619 11:59:10 -- accel/accel.sh@20 -- # 
read -r var val 00:07:03.619 11:59:10 -- accel/accel.sh@21 -- # val=decompress 00:07:03.619 11:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.619 11:59:10 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:03.619 11:59:10 -- accel/accel.sh@20 -- # IFS=: 00:07:03.619 11:59:10 -- accel/accel.sh@20 -- # read -r var val 00:07:03.619 11:59:10 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:03.619 11:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.619 11:59:10 -- accel/accel.sh@20 -- # IFS=: 00:07:03.619 11:59:10 -- accel/accel.sh@20 -- # read -r var val 00:07:03.619 11:59:10 -- accel/accel.sh@21 -- # val= 00:07:03.619 11:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.619 11:59:10 -- accel/accel.sh@20 -- # IFS=: 00:07:03.619 11:59:10 -- accel/accel.sh@20 -- # read -r var val 00:07:03.619 11:59:10 -- accel/accel.sh@21 -- # val=software 00:07:03.619 11:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.619 11:59:10 -- accel/accel.sh@23 -- # accel_module=software 00:07:03.619 11:59:10 -- accel/accel.sh@20 -- # IFS=: 00:07:03.619 11:59:10 -- accel/accel.sh@20 -- # read -r var val 00:07:03.619 11:59:10 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:03.619 11:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.619 11:59:10 -- accel/accel.sh@20 -- # IFS=: 00:07:03.619 11:59:10 -- accel/accel.sh@20 -- # read -r var val 00:07:03.619 11:59:10 -- accel/accel.sh@21 -- # val=32 00:07:03.619 11:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.619 11:59:10 -- accel/accel.sh@20 -- # IFS=: 00:07:03.619 11:59:10 -- accel/accel.sh@20 -- # read -r var val 00:07:03.619 11:59:10 -- accel/accel.sh@21 -- # val=32 00:07:03.619 11:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.619 11:59:10 -- accel/accel.sh@20 -- # IFS=: 00:07:03.619 11:59:10 -- accel/accel.sh@20 -- # read -r var val 00:07:03.619 11:59:10 -- accel/accel.sh@21 -- # val=2 00:07:03.619 11:59:10 -- accel/accel.sh@22 -- # case 
"$var" in 00:07:03.619 11:59:10 -- accel/accel.sh@20 -- # IFS=: 00:07:03.619 11:59:10 -- accel/accel.sh@20 -- # read -r var val 00:07:03.619 11:59:10 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:03.619 11:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.619 11:59:10 -- accel/accel.sh@20 -- # IFS=: 00:07:03.619 11:59:10 -- accel/accel.sh@20 -- # read -r var val 00:07:03.619 11:59:10 -- accel/accel.sh@21 -- # val=Yes 00:07:03.619 11:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.619 11:59:10 -- accel/accel.sh@20 -- # IFS=: 00:07:03.619 11:59:10 -- accel/accel.sh@20 -- # read -r var val 00:07:03.619 11:59:10 -- accel/accel.sh@21 -- # val= 00:07:03.619 11:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.619 11:59:10 -- accel/accel.sh@20 -- # IFS=: 00:07:03.619 11:59:10 -- accel/accel.sh@20 -- # read -r var val 00:07:03.619 11:59:10 -- accel/accel.sh@21 -- # val= 00:07:03.619 11:59:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.619 11:59:10 -- accel/accel.sh@20 -- # IFS=: 00:07:03.619 11:59:10 -- accel/accel.sh@20 -- # read -r var val 00:07:04.997 11:59:12 -- accel/accel.sh@21 -- # val= 00:07:04.997 11:59:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.997 11:59:12 -- accel/accel.sh@20 -- # IFS=: 00:07:04.997 11:59:12 -- accel/accel.sh@20 -- # read -r var val 00:07:04.997 11:59:12 -- accel/accel.sh@21 -- # val= 00:07:04.997 11:59:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.997 11:59:12 -- accel/accel.sh@20 -- # IFS=: 00:07:04.997 11:59:12 -- accel/accel.sh@20 -- # read -r var val 00:07:04.997 11:59:12 -- accel/accel.sh@21 -- # val= 00:07:04.997 11:59:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.997 11:59:12 -- accel/accel.sh@20 -- # IFS=: 00:07:04.997 11:59:12 -- accel/accel.sh@20 -- # read -r var val 00:07:04.997 11:59:12 -- accel/accel.sh@21 -- # val= 00:07:04.997 11:59:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.997 11:59:12 -- accel/accel.sh@20 -- # IFS=: 00:07:04.997 11:59:12 -- accel/accel.sh@20 -- # 
read -r var val 00:07:04.997 11:59:12 -- accel/accel.sh@21 -- # val= 00:07:04.997 11:59:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.997 11:59:12 -- accel/accel.sh@20 -- # IFS=: 00:07:04.997 11:59:12 -- accel/accel.sh@20 -- # read -r var val 00:07:04.997 11:59:12 -- accel/accel.sh@21 -- # val= 00:07:04.997 11:59:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.997 11:59:12 -- accel/accel.sh@20 -- # IFS=: 00:07:04.997 11:59:12 -- accel/accel.sh@20 -- # read -r var val 00:07:04.997 11:59:12 -- accel/accel.sh@21 -- # val= 00:07:04.997 11:59:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.997 11:59:12 -- accel/accel.sh@20 -- # IFS=: 00:07:04.998 11:59:12 -- accel/accel.sh@20 -- # read -r var val 00:07:04.998 11:59:12 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:04.998 11:59:12 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:04.998 11:59:12 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.998 00:07:04.998 real 0m2.916s 00:07:04.998 user 0m2.596s 00:07:04.998 sys 0m0.310s 00:07:04.998 11:59:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.998 11:59:12 -- common/autotest_common.sh@10 -- # set +x 00:07:04.998 ************************************ 00:07:04.998 END TEST accel_deomp_full_mthread 00:07:04.998 ************************************ 00:07:04.998 11:59:12 -- accel/accel.sh@116 -- # [[ y == y ]] 00:07:04.998 11:59:12 -- accel/accel.sh@117 -- # COMPRESSDEV=1 00:07:04.998 11:59:12 -- accel/accel.sh@118 -- # get_expected_opcs 00:07:04.998 11:59:12 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:04.998 11:59:12 -- accel/accel.sh@59 -- # spdk_tgt_pid=1194185 00:07:04.998 11:59:12 -- accel/accel.sh@60 -- # waitforlisten 1194185 00:07:04.998 11:59:12 -- common/autotest_common.sh@819 -- # '[' -z 1194185 ']' 00:07:04.998 11:59:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.998 11:59:12 -- accel/accel.sh@58 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:04.998 11:59:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:04.998 11:59:12 -- accel/accel.sh@58 -- # build_accel_config 00:07:04.998 11:59:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.998 11:59:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.998 11:59:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:04.998 11:59:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.998 11:59:12 -- common/autotest_common.sh@10 -- # set +x 00:07:04.998 11:59:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.998 11:59:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.998 11:59:12 -- accel/accel.sh@37 -- # [[ -n 1 ]] 00:07:04.998 11:59:12 -- accel/accel.sh@38 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:04.998 11:59:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.998 11:59:12 -- accel/accel.sh@42 -- # jq -r . 00:07:04.998 [2024-07-25 11:59:12.114298] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:04.998 [2024-07-25 11:59:12.114362] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1194185 ] 00:07:04.998 [2024-07-25 11:59:12.200441] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.998 [2024-07-25 11:59:12.288080] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:04.998 [2024-07-25 11:59:12.288227] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.565 [2024-07-25 11:59:12.825223] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:06.502 11:59:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:06.502 11:59:13 -- common/autotest_common.sh@852 -- # return 0 00:07:06.502 11:59:13 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:06.502 11:59:13 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:07:06.502 11:59:13 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:06.502 11:59:13 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:06.502 11:59:13 -- common/autotest_common.sh@10 -- # set +x 00:07:06.502 11:59:13 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:06.502 11:59:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.502 11:59:13 -- accel/accel.sh@64 -- # IFS== 00:07:06.502 11:59:13 -- accel/accel.sh@64 -- # read -r opc module 00:07:06.502 11:59:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:06.502 11:59:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.502 11:59:13 -- accel/accel.sh@64 -- # IFS== 00:07:06.502 11:59:13 -- accel/accel.sh@64 -- # read -r opc module 00:07:06.502 11:59:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:06.502 11:59:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.502 11:59:13 -- accel/accel.sh@64 -- # IFS== 00:07:06.502 11:59:13 -- accel/accel.sh@64 -- # read -r opc module 00:07:06.502 11:59:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:06.502 11:59:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.502 11:59:13 -- accel/accel.sh@64 -- # IFS== 00:07:06.502 11:59:13 -- accel/accel.sh@64 -- # read -r opc module 00:07:06.502 11:59:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:06.502 11:59:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.502 11:59:13 -- accel/accel.sh@64 -- # IFS== 00:07:06.502 11:59:13 -- accel/accel.sh@64 -- # read -r opc module 00:07:06.502 11:59:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:06.502 11:59:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.502 11:59:13 -- accel/accel.sh@64 -- # IFS== 00:07:06.502 11:59:13 -- accel/accel.sh@64 -- # read -r opc module 00:07:06.502 11:59:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:06.502 11:59:13 -- accel/accel.sh@63 -- # for 
opc_opt in "${exp_opcs[@]}" 00:07:06.502 11:59:13 -- accel/accel.sh@64 -- # IFS== 00:07:06.502 11:59:13 -- accel/accel.sh@64 -- # read -r opc module 00:07:06.502 11:59:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=dpdk_compressdev 00:07:06.502 11:59:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.502 11:59:13 -- accel/accel.sh@64 -- # IFS== 00:07:06.502 11:59:13 -- accel/accel.sh@64 -- # read -r opc module 00:07:06.502 11:59:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=dpdk_compressdev 00:07:06.502 11:59:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.502 11:59:13 -- accel/accel.sh@64 -- # IFS== 00:07:06.502 11:59:13 -- accel/accel.sh@64 -- # read -r opc module 00:07:06.502 11:59:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:06.502 11:59:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.502 11:59:13 -- accel/accel.sh@64 -- # IFS== 00:07:06.502 11:59:13 -- accel/accel.sh@64 -- # read -r opc module 00:07:06.502 11:59:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:06.502 11:59:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.502 11:59:13 -- accel/accel.sh@64 -- # IFS== 00:07:06.502 11:59:13 -- accel/accel.sh@64 -- # read -r opc module 00:07:06.502 11:59:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:06.502 11:59:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.502 11:59:13 -- accel/accel.sh@64 -- # IFS== 00:07:06.502 11:59:13 -- accel/accel.sh@64 -- # read -r opc module 00:07:06.502 11:59:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:06.502 11:59:13 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:06.502 11:59:13 -- accel/accel.sh@64 -- # IFS== 00:07:06.502 11:59:13 -- accel/accel.sh@64 -- # read -r opc module 00:07:06.502 11:59:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:06.502 11:59:13 -- accel/accel.sh@63 -- # for opc_opt in 
"${exp_opcs[@]}" 00:07:06.502 11:59:13 -- accel/accel.sh@64 -- # IFS== 00:07:06.502 11:59:13 -- accel/accel.sh@64 -- # read -r opc module 00:07:06.502 11:59:13 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:06.502 11:59:13 -- accel/accel.sh@67 -- # killprocess 1194185 00:07:06.502 11:59:13 -- common/autotest_common.sh@926 -- # '[' -z 1194185 ']' 00:07:06.502 11:59:13 -- common/autotest_common.sh@930 -- # kill -0 1194185 00:07:06.502 11:59:13 -- common/autotest_common.sh@931 -- # uname 00:07:06.502 11:59:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:06.502 11:59:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1194185 00:07:06.502 11:59:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:06.502 11:59:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:06.502 11:59:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1194185' 00:07:06.502 killing process with pid 1194185 00:07:06.502 11:59:13 -- common/autotest_common.sh@945 -- # kill 1194185 00:07:06.502 11:59:13 -- common/autotest_common.sh@950 -- # wait 1194185 00:07:06.760 11:59:14 -- accel/accel.sh@68 -- # trap - ERR 00:07:06.760 11:59:14 -- accel/accel.sh@119 -- # run_test accel_cdev_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:06.760 11:59:14 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:06.760 11:59:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:06.761 11:59:14 -- common/autotest_common.sh@10 -- # set +x 00:07:06.761 ************************************ 00:07:06.761 START TEST accel_cdev_comp 00:07:06.761 ************************************ 00:07:06.761 11:59:14 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:06.761 11:59:14 -- accel/accel.sh@16 -- # local accel_opc 00:07:06.761 11:59:14 -- accel/accel.sh@17 -- # local 
accel_module 00:07:06.761 11:59:14 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:06.761 11:59:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:06.761 11:59:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.761 11:59:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.761 11:59:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.761 11:59:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.761 11:59:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.761 11:59:14 -- accel/accel.sh@37 -- # [[ -n 1 ]] 00:07:06.761 11:59:14 -- accel/accel.sh@38 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:07.020 11:59:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.020 11:59:14 -- accel/accel.sh@42 -- # jq -r . 00:07:07.020 [2024-07-25 11:59:14.098231] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:07.020 [2024-07-25 11:59:14.098308] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1194522 ] 00:07:07.020 [2024-07-25 11:59:14.185506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.020 [2024-07-25 11:59:14.267345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.587 [2024-07-25 11:59:14.797542] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:07.587 [2024-07-25 11:59:14.799473] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x2782420 PMD being used: compress_qat 00:07:07.587 [2024-07-25 11:59:14.802914] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x23e47c0 PMD being used: compress_qat 00:07:08.962 11:59:15 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:08.962 00:07:08.962 SPDK Configuration: 00:07:08.962 Core mask: 0x1 00:07:08.962 00:07:08.962 Accel Perf Configuration: 00:07:08.962 Workload Type: compress 00:07:08.962 Transfer size: 4096 bytes 00:07:08.962 Vector count 1 00:07:08.962 Module: dpdk_compressdev 00:07:08.962 File Name: /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:08.962 Queue depth: 32 00:07:08.962 Allocate depth: 32 00:07:08.962 # threads/core: 1 00:07:08.962 Run time: 1 seconds 00:07:08.962 Verify: No 00:07:08.962 00:07:08.962 Running for 1 seconds... 
00:07:08.962 00:07:08.962 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:08.962 ------------------------------------------------------------------------------------ 00:07:08.962 0,0 170645/s 711 MiB/s 0 0 00:07:08.962 ==================================================================================== 00:07:08.962 Total 170645/s 666 MiB/s 0 0' 00:07:08.962 11:59:15 -- accel/accel.sh@20 -- # IFS=: 00:07:08.962 11:59:15 -- accel/accel.sh@20 -- # read -r var val 00:07:08.962 11:59:15 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:08.962 11:59:15 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:08.962 11:59:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.962 11:59:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.962 11:59:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.962 11:59:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.962 11:59:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.962 11:59:16 -- accel/accel.sh@37 -- # [[ -n 1 ]] 00:07:08.962 11:59:16 -- accel/accel.sh@38 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:08.962 11:59:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.962 11:59:16 -- accel/accel.sh@42 -- # jq -r . 00:07:08.962 [2024-07-25 11:59:16.029135] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:08.962 [2024-07-25 11:59:16.029195] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1194739 ] 00:07:08.962 [2024-07-25 11:59:16.118139] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.962 [2024-07-25 11:59:16.200401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.530 [2024-07-25 11:59:16.742073] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:09.530 [2024-07-25 11:59:16.744068] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x25ce420 PMD being used: compress_qat 00:07:09.530 11:59:16 -- accel/accel.sh@21 -- # val= 00:07:09.530 11:59:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.530 11:59:16 -- accel/accel.sh@20 -- # IFS=: 00:07:09.530 11:59:16 -- accel/accel.sh@20 -- # read -r var val 00:07:09.530 [2024-07-25 11:59:16.747612] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x22307c0 PMD being used: compress_qat 00:07:09.530 11:59:16 -- accel/accel.sh@21 -- # val= 00:07:09.530 11:59:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.530 11:59:16 -- accel/accel.sh@20 -- # IFS=: 00:07:09.530 11:59:16 -- accel/accel.sh@20 -- # read -r var val 00:07:09.530 11:59:16 -- accel/accel.sh@21 -- # val= 00:07:09.530 11:59:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.530 11:59:16 -- accel/accel.sh@20 -- # IFS=: 00:07:09.530 11:59:16 -- accel/accel.sh@20 -- # read -r var val 00:07:09.530 11:59:16 -- accel/accel.sh@21 -- # val=0x1 00:07:09.530 11:59:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.530 11:59:16 -- accel/accel.sh@20 -- # IFS=: 00:07:09.530 11:59:16 -- accel/accel.sh@20 -- # read -r var val 00:07:09.530 11:59:16 -- accel/accel.sh@21 -- # val= 00:07:09.530 11:59:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.530 11:59:16 -- accel/accel.sh@20 -- # 
IFS=: 00:07:09.530 11:59:16 -- accel/accel.sh@20 -- # read -r var val 00:07:09.530 11:59:16 -- accel/accel.sh@21 -- # val= 00:07:09.530 11:59:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.530 11:59:16 -- accel/accel.sh@20 -- # IFS=: 00:07:09.530 11:59:16 -- accel/accel.sh@20 -- # read -r var val 00:07:09.530 11:59:16 -- accel/accel.sh@21 -- # val=compress 00:07:09.530 11:59:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.530 11:59:16 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:09.530 11:59:16 -- accel/accel.sh@20 -- # IFS=: 00:07:09.530 11:59:16 -- accel/accel.sh@20 -- # read -r var val 00:07:09.530 11:59:16 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:09.530 11:59:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.530 11:59:16 -- accel/accel.sh@20 -- # IFS=: 00:07:09.530 11:59:16 -- accel/accel.sh@20 -- # read -r var val 00:07:09.530 11:59:16 -- accel/accel.sh@21 -- # val= 00:07:09.530 11:59:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.530 11:59:16 -- accel/accel.sh@20 -- # IFS=: 00:07:09.530 11:59:16 -- accel/accel.sh@20 -- # read -r var val 00:07:09.530 11:59:16 -- accel/accel.sh@21 -- # val=dpdk_compressdev 00:07:09.530 11:59:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.530 11:59:16 -- accel/accel.sh@23 -- # accel_module=dpdk_compressdev 00:07:09.530 11:59:16 -- accel/accel.sh@20 -- # IFS=: 00:07:09.530 11:59:16 -- accel/accel.sh@20 -- # read -r var val 00:07:09.530 11:59:16 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:09.530 11:59:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.530 11:59:16 -- accel/accel.sh@20 -- # IFS=: 00:07:09.530 11:59:16 -- accel/accel.sh@20 -- # read -r var val 00:07:09.530 11:59:16 -- accel/accel.sh@21 -- # val=32 00:07:09.530 11:59:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.530 11:59:16 -- accel/accel.sh@20 -- # IFS=: 00:07:09.530 11:59:16 -- accel/accel.sh@20 -- # read -r var val 00:07:09.530 11:59:16 -- accel/accel.sh@21 -- 
# val=32 00:07:09.530 11:59:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.530 11:59:16 -- accel/accel.sh@20 -- # IFS=: 00:07:09.530 11:59:16 -- accel/accel.sh@20 -- # read -r var val 00:07:09.530 11:59:16 -- accel/accel.sh@21 -- # val=1 00:07:09.530 11:59:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.531 11:59:16 -- accel/accel.sh@20 -- # IFS=: 00:07:09.531 11:59:16 -- accel/accel.sh@20 -- # read -r var val 00:07:09.531 11:59:16 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:09.531 11:59:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.531 11:59:16 -- accel/accel.sh@20 -- # IFS=: 00:07:09.531 11:59:16 -- accel/accel.sh@20 -- # read -r var val 00:07:09.531 11:59:16 -- accel/accel.sh@21 -- # val=No 00:07:09.531 11:59:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.531 11:59:16 -- accel/accel.sh@20 -- # IFS=: 00:07:09.531 11:59:16 -- accel/accel.sh@20 -- # read -r var val 00:07:09.531 11:59:16 -- accel/accel.sh@21 -- # val= 00:07:09.531 11:59:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.531 11:59:16 -- accel/accel.sh@20 -- # IFS=: 00:07:09.531 11:59:16 -- accel/accel.sh@20 -- # read -r var val 00:07:09.531 11:59:16 -- accel/accel.sh@21 -- # val= 00:07:09.531 11:59:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:09.531 11:59:16 -- accel/accel.sh@20 -- # IFS=: 00:07:09.531 11:59:16 -- accel/accel.sh@20 -- # read -r var val 00:07:10.908 11:59:17 -- accel/accel.sh@21 -- # val= 00:07:10.908 11:59:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.908 11:59:17 -- accel/accel.sh@20 -- # IFS=: 00:07:10.908 11:59:17 -- accel/accel.sh@20 -- # read -r var val 00:07:10.908 11:59:17 -- accel/accel.sh@21 -- # val= 00:07:10.908 11:59:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.908 11:59:17 -- accel/accel.sh@20 -- # IFS=: 00:07:10.908 11:59:17 -- accel/accel.sh@20 -- # read -r var val 00:07:10.908 11:59:17 -- accel/accel.sh@21 -- # val= 00:07:10.908 11:59:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.908 11:59:17 -- 
accel/accel.sh@20 -- # IFS=: 00:07:10.908 11:59:17 -- accel/accel.sh@20 -- # read -r var val 00:07:10.908 11:59:17 -- accel/accel.sh@21 -- # val= 00:07:10.908 11:59:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.908 11:59:17 -- accel/accel.sh@20 -- # IFS=: 00:07:10.908 11:59:17 -- accel/accel.sh@20 -- # read -r var val 00:07:10.908 11:59:17 -- accel/accel.sh@21 -- # val= 00:07:10.908 11:59:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.908 11:59:17 -- accel/accel.sh@20 -- # IFS=: 00:07:10.908 11:59:17 -- accel/accel.sh@20 -- # read -r var val 00:07:10.908 11:59:17 -- accel/accel.sh@21 -- # val= 00:07:10.908 11:59:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.908 11:59:17 -- accel/accel.sh@20 -- # IFS=: 00:07:10.908 11:59:17 -- accel/accel.sh@20 -- # read -r var val 00:07:10.908 11:59:17 -- accel/accel.sh@28 -- # [[ -n dpdk_compressdev ]] 00:07:10.908 11:59:17 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:10.908 11:59:17 -- accel/accel.sh@28 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:10.908 00:07:10.908 real 0m3.875s 00:07:10.908 user 0m3.022s 00:07:10.908 sys 0m0.838s 00:07:10.908 11:59:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.908 11:59:17 -- common/autotest_common.sh@10 -- # set +x 00:07:10.908 ************************************ 00:07:10.908 END TEST accel_cdev_comp 00:07:10.908 ************************************ 00:07:10.908 11:59:17 -- accel/accel.sh@120 -- # run_test accel_cdev_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:10.908 11:59:17 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:10.908 11:59:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:10.908 11:59:17 -- common/autotest_common.sh@10 -- # set +x 00:07:10.908 ************************************ 00:07:10.908 START TEST accel_cdev_decomp 00:07:10.908 ************************************ 00:07:10.908 11:59:17 -- 
common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:10.908 11:59:17 -- accel/accel.sh@16 -- # local accel_opc 00:07:10.908 11:59:17 -- accel/accel.sh@17 -- # local accel_module 00:07:10.908 11:59:17 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:10.908 11:59:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.908 11:59:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:10.908 11:59:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.908 11:59:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.908 11:59:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.908 11:59:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.908 11:59:17 -- accel/accel.sh@37 -- # [[ -n 1 ]] 00:07:10.908 11:59:17 -- accel/accel.sh@38 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:10.908 11:59:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.908 11:59:17 -- accel/accel.sh@42 -- # jq -r . 00:07:10.908 [2024-07-25 11:59:18.021402] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:10.908 [2024-07-25 11:59:18.021464] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195027 ] 00:07:10.908 [2024-07-25 11:59:18.107242] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.908 [2024-07-25 11:59:18.190919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.475 [2024-07-25 11:59:18.728124] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:11.475 [2024-07-25 11:59:18.730152] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x2680420 PMD being used: compress_qat 00:07:11.475 [2024-07-25 11:59:18.733737] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x22e27c0 PMD being used: compress_qat 00:07:12.851 11:59:19 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:12.851 00:07:12.851 SPDK Configuration: 00:07:12.851 Core mask: 0x1 00:07:12.851 00:07:12.851 Accel Perf Configuration: 00:07:12.851 Workload Type: decompress 00:07:12.851 Transfer size: 4096 bytes 00:07:12.851 Vector count 1 00:07:12.851 Module: dpdk_compressdev 00:07:12.851 File Name: /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:12.851 Queue depth: 32 00:07:12.851 Allocate depth: 32 00:07:12.851 # threads/core: 1 00:07:12.851 Run time: 1 seconds 00:07:12.851 Verify: Yes 00:07:12.851 00:07:12.851 Running for 1 seconds... 
00:07:12.851 00:07:12.851 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:12.851 ------------------------------------------------------------------------------------ 00:07:12.851 0,0 176580/s 313 MiB/s 0 0 00:07:12.851 ==================================================================================== 00:07:12.851 Total 176580/s 689 MiB/s 0 0' 00:07:12.851 11:59:19 -- accel/accel.sh@20 -- # IFS=: 00:07:12.851 11:59:19 -- accel/accel.sh@20 -- # read -r var val 00:07:12.851 11:59:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:12.851 11:59:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y 00:07:12.851 11:59:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.851 11:59:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.851 11:59:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.851 11:59:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.851 11:59:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.851 11:59:19 -- accel/accel.sh@37 -- # [[ -n 1 ]] 00:07:12.851 11:59:19 -- accel/accel.sh@38 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:12.851 11:59:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.851 11:59:19 -- accel/accel.sh@42 -- # jq -r . 00:07:12.851 [2024-07-25 11:59:19.956766] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:12.851 [2024-07-25 11:59:19.956828] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195298 ] 00:07:12.851 [2024-07-25 11:59:20.046162] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.851 [2024-07-25 11:59:20.126154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.418 [2024-07-25 11:59:20.661807] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:13.418 [2024-07-25 11:59:20.663795] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x22d5420 PMD being used: compress_qat 00:07:13.418 11:59:20 -- accel/accel.sh@21 -- # val= 00:07:13.418 11:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.418 11:59:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.418 11:59:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.418 [2024-07-25 11:59:20.667356] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1f377c0 PMD being used: compress_qat 00:07:13.418 11:59:20 -- accel/accel.sh@21 -- # val= 00:07:13.418 11:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.418 11:59:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.418 11:59:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.418 11:59:20 -- accel/accel.sh@21 -- # val= 00:07:13.418 11:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.418 11:59:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.418 11:59:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.418 11:59:20 -- accel/accel.sh@21 -- # val=0x1 00:07:13.418 11:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.418 11:59:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.418 11:59:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.418 11:59:20 -- accel/accel.sh@21 -- # val= 00:07:13.418 11:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.418 11:59:20 -- accel/accel.sh@20 -- # 
IFS=: 00:07:13.418 11:59:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.418 11:59:20 -- accel/accel.sh@21 -- # val= 00:07:13.419 11:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.419 11:59:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.419 11:59:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.419 11:59:20 -- accel/accel.sh@21 -- # val=decompress 00:07:13.419 11:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.419 11:59:20 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:13.419 11:59:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.419 11:59:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.419 11:59:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:13.419 11:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.419 11:59:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.419 11:59:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.419 11:59:20 -- accel/accel.sh@21 -- # val= 00:07:13.419 11:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.419 11:59:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.419 11:59:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.419 11:59:20 -- accel/accel.sh@21 -- # val=dpdk_compressdev 00:07:13.419 11:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.419 11:59:20 -- accel/accel.sh@23 -- # accel_module=dpdk_compressdev 00:07:13.419 11:59:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.419 11:59:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.419 11:59:20 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:13.419 11:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.419 11:59:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.419 11:59:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.419 11:59:20 -- accel/accel.sh@21 -- # val=32 00:07:13.419 11:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.419 11:59:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.419 11:59:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.419 11:59:20 -- accel/accel.sh@21 
-- # val=32 00:07:13.419 11:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.419 11:59:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.419 11:59:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.419 11:59:20 -- accel/accel.sh@21 -- # val=1 00:07:13.419 11:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.419 11:59:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.419 11:59:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.419 11:59:20 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:13.419 11:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.419 11:59:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.419 11:59:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.419 11:59:20 -- accel/accel.sh@21 -- # val=Yes 00:07:13.419 11:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.419 11:59:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.419 11:59:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.419 11:59:20 -- accel/accel.sh@21 -- # val= 00:07:13.419 11:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.419 11:59:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.419 11:59:20 -- accel/accel.sh@20 -- # read -r var val 00:07:13.419 11:59:20 -- accel/accel.sh@21 -- # val= 00:07:13.419 11:59:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:13.419 11:59:20 -- accel/accel.sh@20 -- # IFS=: 00:07:13.419 11:59:20 -- accel/accel.sh@20 -- # read -r var val 00:07:14.796 11:59:21 -- accel/accel.sh@21 -- # val= 00:07:14.796 11:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.796 11:59:21 -- accel/accel.sh@20 -- # IFS=: 00:07:14.796 11:59:21 -- accel/accel.sh@20 -- # read -r var val 00:07:14.796 11:59:21 -- accel/accel.sh@21 -- # val= 00:07:14.796 11:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.796 11:59:21 -- accel/accel.sh@20 -- # IFS=: 00:07:14.796 11:59:21 -- accel/accel.sh@20 -- # read -r var val 00:07:14.796 11:59:21 -- accel/accel.sh@21 -- # val= 00:07:14.796 11:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.796 11:59:21 -- 
accel/accel.sh@20 -- # IFS=: 00:07:14.796 11:59:21 -- accel/accel.sh@20 -- # read -r var val 00:07:14.796 11:59:21 -- accel/accel.sh@21 -- # val= 00:07:14.796 11:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.796 11:59:21 -- accel/accel.sh@20 -- # IFS=: 00:07:14.796 11:59:21 -- accel/accel.sh@20 -- # read -r var val 00:07:14.796 11:59:21 -- accel/accel.sh@21 -- # val= 00:07:14.796 11:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.796 11:59:21 -- accel/accel.sh@20 -- # IFS=: 00:07:14.796 11:59:21 -- accel/accel.sh@20 -- # read -r var val 00:07:14.796 11:59:21 -- accel/accel.sh@21 -- # val= 00:07:14.796 11:59:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.796 11:59:21 -- accel/accel.sh@20 -- # IFS=: 00:07:14.796 11:59:21 -- accel/accel.sh@20 -- # read -r var val 00:07:14.796 11:59:21 -- accel/accel.sh@28 -- # [[ -n dpdk_compressdev ]] 00:07:14.796 11:59:21 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:14.796 11:59:21 -- accel/accel.sh@28 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:14.796 00:07:14.796 real 0m3.873s 00:07:14.796 user 0m3.018s 00:07:14.796 sys 0m0.831s 00:07:14.796 11:59:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.796 11:59:21 -- common/autotest_common.sh@10 -- # set +x 00:07:14.796 ************************************ 00:07:14.796 END TEST accel_cdev_decomp 00:07:14.796 ************************************ 00:07:14.796 11:59:21 -- accel/accel.sh@121 -- # run_test accel_cdev_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:14.796 11:59:21 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:14.796 11:59:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:14.796 11:59:21 -- common/autotest_common.sh@10 -- # set +x 00:07:14.796 ************************************ 00:07:14.796 START TEST accel_cdev_decmop_full 00:07:14.796 ************************************ 00:07:14.796 11:59:21 -- 
common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:14.796 11:59:21 -- accel/accel.sh@16 -- # local accel_opc 00:07:14.796 11:59:21 -- accel/accel.sh@17 -- # local accel_module 00:07:14.796 11:59:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:14.796 11:59:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:14.796 11:59:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.797 11:59:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.797 11:59:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.797 11:59:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.797 11:59:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.797 11:59:21 -- accel/accel.sh@37 -- # [[ -n 1 ]] 00:07:14.797 11:59:21 -- accel/accel.sh@38 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:14.797 11:59:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.797 11:59:21 -- accel/accel.sh@42 -- # jq -r . 00:07:14.797 [2024-07-25 11:59:21.944461] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:14.797 [2024-07-25 11:59:21.944526] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195511 ] 00:07:14.797 [2024-07-25 11:59:22.030417] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.056 [2024-07-25 11:59:22.116191] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.624 [2024-07-25 11:59:22.643662] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:15.624 [2024-07-25 11:59:22.645588] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x20d5420 PMD being used: compress_qat 00:07:15.624 [2024-07-25 11:59:22.648193] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1d37520 PMD being used: compress_qat 00:07:16.630 11:59:23 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:16.630 00:07:16.630 SPDK Configuration: 00:07:16.630 Core mask: 0x1 00:07:16.630 00:07:16.630 Accel Perf Configuration: 00:07:16.630 Workload Type: decompress 00:07:16.630 Transfer size: 111250 bytes 00:07:16.630 Vector count 1 00:07:16.630 Module: dpdk_compressdev 00:07:16.630 File Name: /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:16.630 Queue depth: 32 00:07:16.630 Allocate depth: 32 00:07:16.630 # threads/core: 1 00:07:16.630 Run time: 1 seconds 00:07:16.630 Verify: Yes 00:07:16.630 00:07:16.630 Running for 1 seconds... 
00:07:16.630 00:07:16.630 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:16.630 ------------------------------------------------------------------------------------ 00:07:16.630 0,0 55621/s 2132 MiB/s 0 0 00:07:16.630 ==================================================================================== 00:07:16.630 Total 55621/s 5901 MiB/s 0 0' 00:07:16.630 11:59:23 -- accel/accel.sh@20 -- # IFS=: 00:07:16.630 11:59:23 -- accel/accel.sh@20 -- # read -r var val 00:07:16.630 11:59:23 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:16.630 11:59:23 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:16.630 11:59:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.630 11:59:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.630 11:59:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.630 11:59:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.630 11:59:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.630 11:59:23 -- accel/accel.sh@37 -- # [[ -n 1 ]] 00:07:16.630 11:59:23 -- accel/accel.sh@38 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:16.630 11:59:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.630 11:59:23 -- accel/accel.sh@42 -- # jq -r . 00:07:16.630 [2024-07-25 11:59:23.862040] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:16.630 [2024-07-25 11:59:23.862102] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195859 ] 00:07:16.889 [2024-07-25 11:59:23.946757] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.889 [2024-07-25 11:59:24.028118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.458 [2024-07-25 11:59:24.556338] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:17.458 [2024-07-25 11:59:24.558361] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xf27420 PMD being used: compress_qat 00:07:17.458 11:59:24 -- accel/accel.sh@21 -- # val= 00:07:17.458 11:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:17.458 [2024-07-25 11:59:24.561066] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xb89520 PMD being used: compress_qat 00:07:17.458 11:59:24 -- accel/accel.sh@21 -- # val= 00:07:17.458 11:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:17.458 11:59:24 -- accel/accel.sh@21 -- # val= 00:07:17.458 11:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:17.458 11:59:24 -- accel/accel.sh@21 -- # val=0x1 00:07:17.458 11:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:17.458 11:59:24 -- accel/accel.sh@21 -- # val= 00:07:17.458 11:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # 
IFS=: 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:17.458 11:59:24 -- accel/accel.sh@21 -- # val= 00:07:17.458 11:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:17.458 11:59:24 -- accel/accel.sh@21 -- # val=decompress 00:07:17.458 11:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.458 11:59:24 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:17.458 11:59:24 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:17.458 11:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:17.458 11:59:24 -- accel/accel.sh@21 -- # val= 00:07:17.458 11:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:17.458 11:59:24 -- accel/accel.sh@21 -- # val=dpdk_compressdev 00:07:17.458 11:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.458 11:59:24 -- accel/accel.sh@23 -- # accel_module=dpdk_compressdev 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:17.458 11:59:24 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:17.458 11:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:17.458 11:59:24 -- accel/accel.sh@21 -- # val=32 00:07:17.458 11:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:17.458 11:59:24 -- 
accel/accel.sh@21 -- # val=32 00:07:17.458 11:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:17.458 11:59:24 -- accel/accel.sh@21 -- # val=1 00:07:17.458 11:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:17.458 11:59:24 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:17.458 11:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:17.458 11:59:24 -- accel/accel.sh@21 -- # val=Yes 00:07:17.458 11:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:17.458 11:59:24 -- accel/accel.sh@21 -- # val= 00:07:17.458 11:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:17.458 11:59:24 -- accel/accel.sh@21 -- # val= 00:07:17.458 11:59:24 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # IFS=: 00:07:17.458 11:59:24 -- accel/accel.sh@20 -- # read -r var val 00:07:18.837 11:59:25 -- accel/accel.sh@21 -- # val= 00:07:18.837 11:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.837 11:59:25 -- accel/accel.sh@20 -- # IFS=: 00:07:18.837 11:59:25 -- accel/accel.sh@20 -- # read -r var val 00:07:18.837 11:59:25 -- accel/accel.sh@21 -- # val= 00:07:18.837 11:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.837 11:59:25 -- accel/accel.sh@20 -- # IFS=: 00:07:18.837 11:59:25 -- accel/accel.sh@20 -- # read -r var val 00:07:18.837 11:59:25 -- accel/accel.sh@21 -- # val= 00:07:18.837 11:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.837 
11:59:25 -- accel/accel.sh@20 -- # IFS=: 00:07:18.837 11:59:25 -- accel/accel.sh@20 -- # read -r var val 00:07:18.837 11:59:25 -- accel/accel.sh@21 -- # val= 00:07:18.837 11:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.837 11:59:25 -- accel/accel.sh@20 -- # IFS=: 00:07:18.837 11:59:25 -- accel/accel.sh@20 -- # read -r var val 00:07:18.837 11:59:25 -- accel/accel.sh@21 -- # val= 00:07:18.837 11:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.837 11:59:25 -- accel/accel.sh@20 -- # IFS=: 00:07:18.837 11:59:25 -- accel/accel.sh@20 -- # read -r var val 00:07:18.837 11:59:25 -- accel/accel.sh@21 -- # val= 00:07:18.837 11:59:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:18.837 11:59:25 -- accel/accel.sh@20 -- # IFS=: 00:07:18.837 11:59:25 -- accel/accel.sh@20 -- # read -r var val 00:07:18.837 11:59:25 -- accel/accel.sh@28 -- # [[ -n dpdk_compressdev ]] 00:07:18.837 11:59:25 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:18.837 11:59:25 -- accel/accel.sh@28 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:18.837 00:07:18.837 real 0m3.848s 00:07:18.837 user 0m2.979s 00:07:18.837 sys 0m0.853s 00:07:18.837 11:59:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:18.837 11:59:25 -- common/autotest_common.sh@10 -- # set +x 00:07:18.837 ************************************ 00:07:18.837 END TEST accel_cdev_decmop_full 00:07:18.837 ************************************ 00:07:18.837 11:59:25 -- accel/accel.sh@122 -- # run_test accel_cdev_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:18.837 11:59:25 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:18.837 11:59:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:18.837 11:59:25 -- common/autotest_common.sh@10 -- # set +x 00:07:18.837 ************************************ 00:07:18.837 START TEST accel_cdev_decomp_mcore 00:07:18.837 ************************************ 
00:07:18.837 11:59:25 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:18.837 11:59:25 -- accel/accel.sh@16 -- # local accel_opc 00:07:18.837 11:59:25 -- accel/accel.sh@17 -- # local accel_module 00:07:18.837 11:59:25 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:18.837 11:59:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:18.837 11:59:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.837 11:59:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.837 11:59:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.837 11:59:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.837 11:59:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.837 11:59:25 -- accel/accel.sh@37 -- # [[ -n 1 ]] 00:07:18.837 11:59:25 -- accel/accel.sh@38 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:18.837 11:59:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.837 11:59:25 -- accel/accel.sh@42 -- # jq -r . 00:07:18.837 [2024-07-25 11:59:25.825321] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:18.837 [2024-07-25 11:59:25.825371] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196062 ] 00:07:18.837 [2024-07-25 11:59:25.911717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:18.837 [2024-07-25 11:59:25.997876] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.837 [2024-07-25 11:59:25.997964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:18.837 [2024-07-25 11:59:25.998042] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:18.837 [2024-07-25 11:59:25.998044] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.406 [2024-07-25 11:59:26.565200] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:19.406 [2024-07-25 11:59:26.567180] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x2a1eac0 PMD being used: compress_qat 00:07:19.406 [2024-07-25 11:59:26.572004] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f24e4197a30 PMD being used: compress_qat 00:07:19.406 [2024-07-25 11:59:26.573149] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f24dc197a30 PMD being used: compress_qat 00:07:19.406 [2024-07-25 11:59:26.573513] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x2887c30 PMD being used: compress_qat 00:07:19.406 [2024-07-25 11:59:26.573720] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f24d4197a30 PMD being used: compress_qat 00:07:20.784 11:59:27 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:20.784 00:07:20.784 SPDK Configuration: 00:07:20.784 Core mask: 0xf 00:07:20.784 00:07:20.784 Accel Perf Configuration: 00:07:20.784 Workload Type: decompress 00:07:20.784 Transfer size: 4096 bytes 00:07:20.784 Vector count 1 00:07:20.784 Module: dpdk_compressdev 00:07:20.784 File Name: /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:20.784 Queue depth: 32 00:07:20.784 Allocate depth: 32 00:07:20.784 # threads/core: 1 00:07:20.784 Run time: 1 seconds 00:07:20.784 Verify: Yes 00:07:20.784 00:07:20.784 Running for 1 seconds... 00:07:20.784 00:07:20.784 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:20.784 ------------------------------------------------------------------------------------ 00:07:20.784 0,0 63060/s 112 MiB/s 0 0 00:07:20.784 3,0 64032/s 113 MiB/s 0 0 00:07:20.784 2,0 63942/s 113 MiB/s 0 0 00:07:20.784 1,0 63875/s 113 MiB/s 0 0 00:07:20.784 ==================================================================================== 00:07:20.784 Total 254909/s 995 MiB/s 0 0' 00:07:20.784 11:59:27 -- accel/accel.sh@20 -- # IFS=: 00:07:20.784 11:59:27 -- accel/accel.sh@20 -- # read -r var val 00:07:20.784 11:59:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:20.784 11:59:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:20.785 11:59:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.785 11:59:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.785 11:59:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.785 11:59:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.785 11:59:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.785 11:59:27 -- accel/accel.sh@37 -- # [[ -n 1 ]] 00:07:20.785 11:59:27 -- accel/accel.sh@38 -- # accel_json_cfg+=('{"method": 
"compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:20.785 11:59:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.785 11:59:27 -- accel/accel.sh@42 -- # jq -r . 00:07:20.785 [2024-07-25 11:59:27.799360] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:20.785 [2024-07-25 11:59:27.799418] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196418 ] 00:07:20.785 [2024-07-25 11:59:27.886691] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:20.785 [2024-07-25 11:59:27.971807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.785 [2024-07-25 11:59:27.971894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:20.785 [2024-07-25 11:59:27.971970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:20.785 [2024-07-25 11:59:27.971971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.354 [2024-07-25 11:59:28.531773] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:21.354 [2024-07-25 11:59:28.533739] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x10a5ac0 PMD being used: compress_qat 00:07:21.354 11:59:28 -- accel/accel.sh@21 -- # val= 00:07:21.354 11:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # IFS=: 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # read -r var val 00:07:21.354 11:59:28 -- accel/accel.sh@21 -- # val= 00:07:21.354 11:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # IFS=: 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # read -r var val 00:07:21.354 11:59:28 -- accel/accel.sh@21 -- # val= 00:07:21.354 [2024-07-25 11:59:28.538664] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 
0x7f6944197a30 PMD being used: compress_qat 00:07:21.354 11:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # IFS=: 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # read -r var val 00:07:21.354 11:59:28 -- accel/accel.sh@21 -- # val=0xf 00:07:21.354 [2024-07-25 11:59:28.539732] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f693c197a30 PMD being used: compress_qat 00:07:21.354 11:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # IFS=: 00:07:21.354 [2024-07-25 11:59:28.540134] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xf0ec30 PMD being used: compress_qat 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # read -r var val 00:07:21.354 [2024-07-25 11:59:28.540376] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f6934197a30 PMD being used: compress_qat 00:07:21.354 11:59:28 -- accel/accel.sh@21 -- # val= 00:07:21.354 11:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # IFS=: 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # read -r var val 00:07:21.354 11:59:28 -- accel/accel.sh@21 -- # val= 00:07:21.354 11:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # IFS=: 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # read -r var val 00:07:21.354 11:59:28 -- accel/accel.sh@21 -- # val=decompress 00:07:21.354 11:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.354 11:59:28 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # IFS=: 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # read -r var val 00:07:21.354 11:59:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:21.354 11:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # IFS=: 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # read -r var val 00:07:21.354 11:59:28 -- accel/accel.sh@21 -- # val= 00:07:21.354 11:59:28 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # IFS=: 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # read -r var val 00:07:21.354 11:59:28 -- accel/accel.sh@21 -- # val=dpdk_compressdev 00:07:21.354 11:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.354 11:59:28 -- accel/accel.sh@23 -- # accel_module=dpdk_compressdev 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # IFS=: 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # read -r var val 00:07:21.354 11:59:28 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:21.354 11:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # IFS=: 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # read -r var val 00:07:21.354 11:59:28 -- accel/accel.sh@21 -- # val=32 00:07:21.354 11:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # IFS=: 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # read -r var val 00:07:21.354 11:59:28 -- accel/accel.sh@21 -- # val=32 00:07:21.354 11:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # IFS=: 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # read -r var val 00:07:21.354 11:59:28 -- accel/accel.sh@21 -- # val=1 00:07:21.354 11:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # IFS=: 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # read -r var val 00:07:21.354 11:59:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:21.354 11:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # IFS=: 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # read -r var val 00:07:21.354 11:59:28 -- accel/accel.sh@21 -- # val=Yes 00:07:21.354 11:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # IFS=: 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # read -r var val 00:07:21.354 11:59:28 -- 
accel/accel.sh@21 -- # val= 00:07:21.354 11:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # IFS=: 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # read -r var val 00:07:21.354 11:59:28 -- accel/accel.sh@21 -- # val= 00:07:21.354 11:59:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # IFS=: 00:07:21.354 11:59:28 -- accel/accel.sh@20 -- # read -r var val 00:07:22.734 11:59:29 -- accel/accel.sh@21 -- # val= 00:07:22.734 11:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.734 11:59:29 -- accel/accel.sh@20 -- # IFS=: 00:07:22.734 11:59:29 -- accel/accel.sh@20 -- # read -r var val 00:07:22.734 11:59:29 -- accel/accel.sh@21 -- # val= 00:07:22.734 11:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.734 11:59:29 -- accel/accel.sh@20 -- # IFS=: 00:07:22.734 11:59:29 -- accel/accel.sh@20 -- # read -r var val 00:07:22.734 11:59:29 -- accel/accel.sh@21 -- # val= 00:07:22.734 11:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.734 11:59:29 -- accel/accel.sh@20 -- # IFS=: 00:07:22.734 11:59:29 -- accel/accel.sh@20 -- # read -r var val 00:07:22.734 11:59:29 -- accel/accel.sh@21 -- # val= 00:07:22.734 11:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.734 11:59:29 -- accel/accel.sh@20 -- # IFS=: 00:07:22.734 11:59:29 -- accel/accel.sh@20 -- # read -r var val 00:07:22.734 11:59:29 -- accel/accel.sh@21 -- # val= 00:07:22.734 11:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.734 11:59:29 -- accel/accel.sh@20 -- # IFS=: 00:07:22.734 11:59:29 -- accel/accel.sh@20 -- # read -r var val 00:07:22.734 11:59:29 -- accel/accel.sh@21 -- # val= 00:07:22.734 11:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.734 11:59:29 -- accel/accel.sh@20 -- # IFS=: 00:07:22.734 11:59:29 -- accel/accel.sh@20 -- # read -r var val 00:07:22.734 11:59:29 -- accel/accel.sh@21 -- # val= 00:07:22.734 11:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.734 11:59:29 -- 
accel/accel.sh@20 -- # IFS=: 00:07:22.734 11:59:29 -- accel/accel.sh@20 -- # read -r var val 00:07:22.734 11:59:29 -- accel/accel.sh@21 -- # val= 00:07:22.734 11:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.734 11:59:29 -- accel/accel.sh@20 -- # IFS=: 00:07:22.734 11:59:29 -- accel/accel.sh@20 -- # read -r var val 00:07:22.734 11:59:29 -- accel/accel.sh@21 -- # val= 00:07:22.734 11:59:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.734 11:59:29 -- accel/accel.sh@20 -- # IFS=: 00:07:22.734 11:59:29 -- accel/accel.sh@20 -- # read -r var val 00:07:22.734 11:59:29 -- accel/accel.sh@28 -- # [[ -n dpdk_compressdev ]] 00:07:22.734 11:59:29 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:22.734 11:59:29 -- accel/accel.sh@28 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:22.734 00:07:22.734 real 0m3.946s 00:07:22.734 user 0m12.979s 00:07:22.734 sys 0m0.901s 00:07:22.734 11:59:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.734 11:59:29 -- common/autotest_common.sh@10 -- # set +x 00:07:22.734 ************************************ 00:07:22.734 END TEST accel_cdev_decomp_mcore 00:07:22.734 ************************************ 00:07:22.734 11:59:29 -- accel/accel.sh@123 -- # run_test accel_cdev_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:22.734 11:59:29 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:22.734 11:59:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:22.734 11:59:29 -- common/autotest_common.sh@10 -- # set +x 00:07:22.734 ************************************ 00:07:22.734 START TEST accel_cdev_decomp_full_mcore 00:07:22.734 ************************************ 00:07:22.734 11:59:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:22.734 11:59:29 -- accel/accel.sh@16 -- # local accel_opc 
00:07:22.734 11:59:29 -- accel/accel.sh@17 -- # local accel_module 00:07:22.734 11:59:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:22.734 11:59:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:22.734 11:59:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.734 11:59:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.734 11:59:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.734 11:59:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.734 11:59:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.734 11:59:29 -- accel/accel.sh@37 -- # [[ -n 1 ]] 00:07:22.734 11:59:29 -- accel/accel.sh@38 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:22.734 11:59:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.734 11:59:29 -- accel/accel.sh@42 -- # jq -r . 00:07:22.734 [2024-07-25 11:59:29.835133] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:22.734 [2024-07-25 11:59:29.835209] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196625 ] 00:07:22.734 [2024-07-25 11:59:29.925594] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:22.734 [2024-07-25 11:59:30.025975] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.734 [2024-07-25 11:59:30.025994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.734 [2024-07-25 11:59:30.026014] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.734 [2024-07-25 11:59:30.026017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.304 [2024-07-25 11:59:30.599718] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:23.304 [2024-07-25 11:59:30.601710] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x295cac0 PMD being used: compress_qat 00:07:23.304 [2024-07-25 11:59:30.605810] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f8750197a30 PMD being used: compress_qat 00:07:23.304 [2024-07-25 11:59:30.606910] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f8748197a30 PMD being used: compress_qat 00:07:23.304 [2024-07-25 11:59:30.607461] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x25bf1a0 PMD being used: compress_qat 00:07:23.304 [2024-07-25 11:59:30.607599] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f8740197a30 PMD being used: compress_qat 00:07:24.682 11:59:31 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:24.682 00:07:24.682 SPDK Configuration: 00:07:24.682 Core mask: 0xf 00:07:24.682 00:07:24.682 Accel Perf Configuration: 00:07:24.682 Workload Type: decompress 00:07:24.682 Transfer size: 111250 bytes 00:07:24.682 Vector count 1 00:07:24.682 Module: dpdk_compressdev 00:07:24.682 File Name: /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:24.682 Queue depth: 32 00:07:24.682 Allocate depth: 32 00:07:24.682 # threads/core: 1 00:07:24.682 Run time: 1 seconds 00:07:24.682 Verify: Yes 00:07:24.682 00:07:24.682 Running for 1 seconds... 00:07:24.682 00:07:24.682 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:24.682 ------------------------------------------------------------------------------------ 00:07:24.682 0,0 17094/s 655 MiB/s 0 0 00:07:24.682 3,0 17058/s 653 MiB/s 0 0 00:07:24.682 2,0 16972/s 650 MiB/s 0 0 00:07:24.682 1,0 17238/s 660 MiB/s 0 0 00:07:24.682 ==================================================================================== 00:07:24.682 Total 68362/s 7252 MiB/s 0 0' 00:07:24.682 11:59:31 -- accel/accel.sh@20 -- # IFS=: 00:07:24.682 11:59:31 -- accel/accel.sh@20 -- # read -r var val 00:07:24.682 11:59:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:24.682 11:59:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:24.682 11:59:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:24.682 11:59:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:24.682 11:59:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.682 11:59:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.682 11:59:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:24.682 11:59:31 -- accel/accel.sh@37 -- # [[ -n 1 ]] 00:07:24.682 11:59:31 -- accel/accel.sh@38 -- # accel_json_cfg+=('{"method": 
"compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:24.682 11:59:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:24.682 11:59:31 -- accel/accel.sh@42 -- # jq -r . 00:07:24.682 [2024-07-25 11:59:31.850721] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:24.682 [2024-07-25 11:59:31.850784] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196983 ] 00:07:24.682 [2024-07-25 11:59:31.934711] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:24.941 [2024-07-25 11:59:32.021354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.941 [2024-07-25 11:59:32.021444] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.941 [2024-07-25 11:59:32.021520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:24.941 [2024-07-25 11:59:32.021522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.510 [2024-07-25 11:59:32.572898] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:25.510 [2024-07-25 11:59:32.574796] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x2564ac0 PMD being used: compress_qat 00:07:25.510 11:59:32 -- accel/accel.sh@21 -- # val= 00:07:25.510 11:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.510 11:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.510 11:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.510 11:59:32 -- accel/accel.sh@21 -- # val= 00:07:25.510 11:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.510 11:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.510 11:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.510 11:59:32 -- accel/accel.sh@21 -- # val= 00:07:25.510 [2024-07-25 11:59:32.578587] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 
0x7f3c0c197a30 PMD being used: compress_qat 11:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.510 11:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.510 11:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.510 11:59:32 -- accel/accel.sh@21 -- # val=0xf 00:07:25.510 [2024-07-25 11:59:32.579627] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f3c04197a30 PMD being used: compress_qat 00:07:25.510 11:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.510 11:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.510 [2024-07-25 11:59:32.580073] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x21c71a0 PMD being used: compress_qat [2024-07-25 11:59:32.580270] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f3bfc197a30 PMD being used: compress_qat 11:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.510 00:07:25.510 11:59:32 -- accel/accel.sh@21 -- # val= 00:07:25.510 11:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.510 11:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.510 11:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.510 11:59:32 -- accel/accel.sh@21 -- # val=decompress 00:07:25.510 11:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.510 11:59:32 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:25.510 11:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.510 11:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.510 11:59:32 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:25.510 11:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.510 11:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.510 11:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.510 11:59:32 -- accel/accel.sh@21 -- # val= 00:07:25.510 11:59:32 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:25.510 11:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.510 11:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.510 11:59:32 -- accel/accel.sh@21 -- # val=dpdk_compressdev 00:07:25.510 11:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.510 11:59:32 -- accel/accel.sh@23 -- # accel_module=dpdk_compressdev 00:07:25.510 11:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.510 11:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.510 11:59:32 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:25.510 11:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.510 11:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.510 11:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.510 11:59:32 -- accel/accel.sh@21 -- # val=32 00:07:25.510 11:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.510 11:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.510 11:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.510 11:59:32 -- accel/accel.sh@21 -- # val=32 00:07:25.510 11:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.510 11:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.510 11:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.510 11:59:32 -- accel/accel.sh@21 -- # val=1 00:07:25.510 11:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.510 11:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.510 11:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.510 11:59:32 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:25.510 11:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.510 11:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.510 11:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.510 11:59:32 -- accel/accel.sh@21 -- # val=Yes 00:07:25.510 11:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.510 11:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.510 11:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.510 11:59:32 -- 
accel/accel.sh@21 -- # val= 00:07:25.510 11:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.510 11:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.510 11:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:25.510 11:59:32 -- accel/accel.sh@21 -- # val= 00:07:25.510 11:59:32 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.510 11:59:32 -- accel/accel.sh@20 -- # IFS=: 00:07:25.510 11:59:32 -- accel/accel.sh@20 -- # read -r var val 00:07:26.890 11:59:33 -- accel/accel.sh@21 -- # val= 00:07:26.890 11:59:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.890 11:59:33 -- accel/accel.sh@20 -- # IFS=: 00:07:26.890 11:59:33 -- accel/accel.sh@20 -- # read -r var val 00:07:26.890 11:59:33 -- accel/accel.sh@21 -- # val= 00:07:26.890 11:59:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.890 11:59:33 -- accel/accel.sh@20 -- # IFS=: 00:07:26.890 11:59:33 -- accel/accel.sh@20 -- # read -r var val 00:07:26.890 11:59:33 -- accel/accel.sh@21 -- # val= 00:07:26.890 11:59:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.890 11:59:33 -- accel/accel.sh@20 -- # IFS=: 00:07:26.890 11:59:33 -- accel/accel.sh@20 -- # read -r var val 00:07:26.890 11:59:33 -- accel/accel.sh@21 -- # val= 00:07:26.890 11:59:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.890 11:59:33 -- accel/accel.sh@20 -- # IFS=: 00:07:26.890 11:59:33 -- accel/accel.sh@20 -- # read -r var val 00:07:26.890 11:59:33 -- accel/accel.sh@21 -- # val= 00:07:26.890 11:59:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.890 11:59:33 -- accel/accel.sh@20 -- # IFS=: 00:07:26.890 11:59:33 -- accel/accel.sh@20 -- # read -r var val 00:07:26.890 11:59:33 -- accel/accel.sh@21 -- # val= 00:07:26.890 11:59:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.890 11:59:33 -- accel/accel.sh@20 -- # IFS=: 00:07:26.890 11:59:33 -- accel/accel.sh@20 -- # read -r var val 00:07:26.890 11:59:33 -- accel/accel.sh@21 -- # val= 00:07:26.890 11:59:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.890 11:59:33 -- 
accel/accel.sh@20 -- # IFS=: 00:07:26.890 11:59:33 -- accel/accel.sh@20 -- # read -r var val 00:07:26.890 11:59:33 -- accel/accel.sh@21 -- # val= 00:07:26.890 11:59:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.890 11:59:33 -- accel/accel.sh@20 -- # IFS=: 00:07:26.890 11:59:33 -- accel/accel.sh@20 -- # read -r var val 00:07:26.890 11:59:33 -- accel/accel.sh@21 -- # val= 00:07:26.890 11:59:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.890 11:59:33 -- accel/accel.sh@20 -- # IFS=: 00:07:26.890 11:59:33 -- accel/accel.sh@20 -- # read -r var val 00:07:26.890 11:59:33 -- accel/accel.sh@28 -- # [[ -n dpdk_compressdev ]] 00:07:26.890 11:59:33 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:26.890 11:59:33 -- accel/accel.sh@28 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:26.890 00:07:26.890 real 0m3.988s 00:07:26.890 user 0m13.044s 00:07:26.890 sys 0m0.892s 00:07:26.890 11:59:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.890 11:59:33 -- common/autotest_common.sh@10 -- # set +x 00:07:26.890 ************************************ 00:07:26.890 END TEST accel_cdev_decomp_full_mcore 00:07:26.890 ************************************ 00:07:26.890 11:59:33 -- accel/accel.sh@124 -- # run_test accel_cdev_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:26.890 11:59:33 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:26.890 11:59:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:26.890 11:59:33 -- common/autotest_common.sh@10 -- # set +x 00:07:26.890 ************************************ 00:07:26.890 START TEST accel_cdev_decomp_mthread 00:07:26.890 ************************************ 00:07:26.890 11:59:33 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:26.890 11:59:33 -- accel/accel.sh@16 -- # local accel_opc 00:07:26.890 
11:59:33 -- accel/accel.sh@17 -- # local accel_module 00:07:26.890 11:59:33 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:26.890 11:59:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.890 11:59:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:26.890 11:59:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.890 11:59:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.890 11:59:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.890 11:59:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.890 11:59:33 -- accel/accel.sh@37 -- # [[ -n 1 ]] 00:07:26.890 11:59:33 -- accel/accel.sh@38 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:26.890 11:59:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.890 11:59:33 -- accel/accel.sh@42 -- # jq -r . 00:07:26.890 [2024-07-25 11:59:33.865925] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:26.890 [2024-07-25 11:59:33.865986] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197192 ] 00:07:26.890 [2024-07-25 11:59:33.950133] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.890 [2024-07-25 11:59:34.031105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.460 [2024-07-25 11:59:34.562479] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:27.460 [2024-07-25 11:59:34.564514] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x2256420 PMD being used: compress_qat 00:07:27.460 [2024-07-25 11:59:34.568656] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1eb87c0 PMD being used: compress_qat 00:07:27.460 [2024-07-25 11:59:34.570437] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x21e2090 PMD being used: compress_qat 00:07:28.839 11:59:35 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:28.839 00:07:28.839 SPDK Configuration: 00:07:28.839 Core mask: 0x1 00:07:28.839 00:07:28.839 Accel Perf Configuration: 00:07:28.839 Workload Type: decompress 00:07:28.839 Transfer size: 4096 bytes 00:07:28.839 Vector count 1 00:07:28.839 Module: dpdk_compressdev 00:07:28.839 File Name: /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:28.839 Queue depth: 32 00:07:28.839 Allocate depth: 32 00:07:28.839 # threads/core: 2 00:07:28.839 Run time: 1 seconds 00:07:28.839 Verify: Yes 00:07:28.839 00:07:28.839 Running for 1 seconds... 
00:07:28.839 00:07:28.839 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:28.839 ------------------------------------------------------------------------------------ 00:07:28.839 0,1 89563/s 159 MiB/s 0 0 00:07:28.839 0,0 89405/s 158 MiB/s 0 0 00:07:28.839 ==================================================================================== 00:07:28.839 Total 178968/s 699 MiB/s 0 0' 00:07:28.839 11:59:35 -- accel/accel.sh@20 -- # IFS=: 00:07:28.839 11:59:35 -- accel/accel.sh@20 -- # read -r var val 00:07:28.839 11:59:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:28.839 11:59:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:28.839 11:59:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.839 11:59:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:28.839 11:59:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:28.839 11:59:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.839 11:59:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:28.839 11:59:35 -- accel/accel.sh@37 -- # [[ -n 1 ]] 00:07:28.839 11:59:35 -- accel/accel.sh@38 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:28.839 11:59:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:28.839 11:59:35 -- accel/accel.sh@42 -- # jq -r . 00:07:28.839 [2024-07-25 11:59:35.793395] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:28.839 [2024-07-25 11:59:35.793458] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197504 ] 00:07:28.839 [2024-07-25 11:59:35.878647] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.839 [2024-07-25 11:59:35.961035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.406 [2024-07-25 11:59:36.490093] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:29.406 [2024-07-25 11:59:36.492065] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1928420 PMD being used: compress_qat 00:07:29.406 11:59:36 -- accel/accel.sh@21 -- # val= 00:07:29.406 11:59:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.406 11:59:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.406 11:59:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.406 11:59:36 -- accel/accel.sh@21 -- # val= 00:07:29.406 11:59:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.406 11:59:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.406 [2024-07-25 11:59:36.496282] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x158a7c0 PMD being used: compress_qat 00:07:29.406 11:59:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.406 11:59:36 -- accel/accel.sh@21 -- # val= 00:07:29.406 11:59:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.406 11:59:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.406 11:59:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.406 11:59:36 -- accel/accel.sh@21 -- # val=0x1 00:07:29.406 [2024-07-25 11:59:36.498159] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x18b4090 PMD being used: compress_qat 00:07:29.406 11:59:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.406 11:59:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.406 11:59:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.406 11:59:36 -- 
accel/accel.sh@21 -- # val= 00:07:29.406 11:59:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.406 11:59:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.406 11:59:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.406 11:59:36 -- accel/accel.sh@21 -- # val= 00:07:29.406 11:59:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.406 11:59:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.406 11:59:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.406 11:59:36 -- accel/accel.sh@21 -- # val=decompress 00:07:29.406 11:59:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.406 11:59:36 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:29.406 11:59:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.406 11:59:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.406 11:59:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:29.406 11:59:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.406 11:59:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.406 11:59:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.406 11:59:36 -- accel/accel.sh@21 -- # val= 00:07:29.406 11:59:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.406 11:59:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.406 11:59:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.406 11:59:36 -- accel/accel.sh@21 -- # val=dpdk_compressdev 00:07:29.406 11:59:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.406 11:59:36 -- accel/accel.sh@23 -- # accel_module=dpdk_compressdev 00:07:29.406 11:59:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.406 11:59:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.406 11:59:36 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:29.406 11:59:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.406 11:59:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.406 11:59:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.406 11:59:36 -- accel/accel.sh@21 -- # val=32 00:07:29.406 11:59:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.406 11:59:36 
-- accel/accel.sh@20 -- # IFS=: 00:07:29.406 11:59:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.406 11:59:36 -- accel/accel.sh@21 -- # val=32 00:07:29.406 11:59:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.406 11:59:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.406 11:59:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.406 11:59:36 -- accel/accel.sh@21 -- # val=2 00:07:29.406 11:59:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.406 11:59:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.406 11:59:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.406 11:59:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:29.406 11:59:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.406 11:59:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.406 11:59:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.406 11:59:36 -- accel/accel.sh@21 -- # val=Yes 00:07:29.406 11:59:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.406 11:59:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.406 11:59:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.406 11:59:36 -- accel/accel.sh@21 -- # val= 00:07:29.406 11:59:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.406 11:59:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.406 11:59:36 -- accel/accel.sh@20 -- # read -r var val 00:07:29.406 11:59:36 -- accel/accel.sh@21 -- # val= 00:07:29.406 11:59:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.407 11:59:36 -- accel/accel.sh@20 -- # IFS=: 00:07:29.407 11:59:36 -- accel/accel.sh@20 -- # read -r var val 00:07:30.784 11:59:37 -- accel/accel.sh@21 -- # val= 00:07:30.784 11:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.784 11:59:37 -- accel/accel.sh@20 -- # IFS=: 00:07:30.784 11:59:37 -- accel/accel.sh@20 -- # read -r var val 00:07:30.784 11:59:37 -- accel/accel.sh@21 -- # val= 00:07:30.784 11:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.784 11:59:37 -- accel/accel.sh@20 -- # IFS=: 00:07:30.784 11:59:37 -- accel/accel.sh@20 -- # read -r var val 00:07:30.784 
11:59:37 -- accel/accel.sh@21 -- # val= 00:07:30.784 11:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.784 11:59:37 -- accel/accel.sh@20 -- # IFS=: 00:07:30.784 11:59:37 -- accel/accel.sh@20 -- # read -r var val 00:07:30.784 11:59:37 -- accel/accel.sh@21 -- # val= 00:07:30.784 11:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.784 11:59:37 -- accel/accel.sh@20 -- # IFS=: 00:07:30.784 11:59:37 -- accel/accel.sh@20 -- # read -r var val 00:07:30.784 11:59:37 -- accel/accel.sh@21 -- # val= 00:07:30.784 11:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.784 11:59:37 -- accel/accel.sh@20 -- # IFS=: 00:07:30.784 11:59:37 -- accel/accel.sh@20 -- # read -r var val 00:07:30.784 11:59:37 -- accel/accel.sh@21 -- # val= 00:07:30.784 11:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.784 11:59:37 -- accel/accel.sh@20 -- # IFS=: 00:07:30.784 11:59:37 -- accel/accel.sh@20 -- # read -r var val 00:07:30.784 11:59:37 -- accel/accel.sh@21 -- # val= 00:07:30.784 11:59:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.784 11:59:37 -- accel/accel.sh@20 -- # IFS=: 00:07:30.784 11:59:37 -- accel/accel.sh@20 -- # read -r var val 00:07:30.784 11:59:37 -- accel/accel.sh@28 -- # [[ -n dpdk_compressdev ]] 00:07:30.784 11:59:37 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:30.784 11:59:37 -- accel/accel.sh@28 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:30.784 00:07:30.784 real 0m3.847s 00:07:30.784 user 0m3.006s 00:07:30.784 sys 0m0.828s 00:07:30.784 11:59:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.784 11:59:37 -- common/autotest_common.sh@10 -- # set +x 00:07:30.784 ************************************ 00:07:30.784 END TEST accel_cdev_decomp_mthread 00:07:30.784 ************************************ 00:07:30.784 11:59:37 -- accel/accel.sh@125 -- # run_test accel_cdev_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:30.784 
11:59:37 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:30.784 11:59:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:30.784 11:59:37 -- common/autotest_common.sh@10 -- # set +x 00:07:30.784 ************************************ 00:07:30.784 START TEST accel_cdev_deomp_full_mthread 00:07:30.784 ************************************ 00:07:30.784 11:59:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:30.784 11:59:37 -- accel/accel.sh@16 -- # local accel_opc 00:07:30.784 11:59:37 -- accel/accel.sh@17 -- # local accel_module 00:07:30.784 11:59:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:30.784 11:59:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:30.784 11:59:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.784 11:59:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.784 11:59:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.784 11:59:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.784 11:59:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.784 11:59:37 -- accel/accel.sh@37 -- # [[ -n 1 ]] 00:07:30.784 11:59:37 -- accel/accel.sh@38 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:30.784 11:59:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.784 11:59:37 -- accel/accel.sh@42 -- # jq -r . 00:07:30.784 [2024-07-25 11:59:37.751045] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:30.784 [2024-07-25 11:59:37.751109] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197749 ] 00:07:30.784 [2024-07-25 11:59:37.836363] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.784 [2024-07-25 11:59:37.919124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.351 [2024-07-25 11:59:38.449266] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:31.351 [2024-07-25 11:59:38.451298] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x197d420 PMD being used: compress_qat 00:07:31.351 [2024-07-25 11:59:38.454694] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x15df520 PMD being used: compress_qat 00:07:31.351 [2024-07-25 11:59:38.456711] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x190cd40 PMD being used: compress_qat 00:07:32.727 11:59:39 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:32.727 00:07:32.727 SPDK Configuration: 00:07:32.727 Core mask: 0x1 00:07:32.727 00:07:32.727 Accel Perf Configuration: 00:07:32.727 Workload Type: decompress 00:07:32.727 Transfer size: 111250 bytes 00:07:32.727 Vector count 1 00:07:32.727 Module: dpdk_compressdev 00:07:32.727 File Name: /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:32.727 Queue depth: 32 00:07:32.727 Allocate depth: 32 00:07:32.727 # threads/core: 2 00:07:32.727 Run time: 1 seconds 00:07:32.727 Verify: Yes 00:07:32.727 00:07:32.727 Running for 1 seconds... 
00:07:32.727 00:07:32.727 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:32.728 ------------------------------------------------------------------------------------ 00:07:32.728 0,1 26058/s 998 MiB/s 0 0 00:07:32.728 0,0 25994/s 996 MiB/s 0 0 00:07:32.728 ==================================================================================== 00:07:32.728 Total 52052/s 5522 MiB/s 0 0' 00:07:32.728 11:59:39 -- accel/accel.sh@20 -- # IFS=: 00:07:32.728 11:59:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:32.728 11:59:39 -- accel/accel.sh@20 -- # read -r var val 00:07:32.728 11:59:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:32.728 11:59:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.728 11:59:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.728 11:59:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.728 11:59:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.728 11:59:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.728 11:59:39 -- accel/accel.sh@37 -- # [[ -n 1 ]] 00:07:32.728 11:59:39 -- accel/accel.sh@38 -- # accel_json_cfg+=('{"method": "compressdev_scan_accel_module", "params":{"pmd": 0}}') 00:07:32.728 11:59:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.728 11:59:39 -- accel/accel.sh@42 -- # jq -r . 00:07:32.728 [2024-07-25 11:59:39.663966] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:32.728 [2024-07-25 11:59:39.664016] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197953 ] 00:07:32.728 [2024-07-25 11:59:39.751041] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.728 [2024-07-25 11:59:39.834711] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.295 [2024-07-25 11:59:40.370052] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:07:33.295 [2024-07-25 11:59:40.372053] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1cf9420 PMD being used: compress_qat 00:07:33.295 11:59:40 -- accel/accel.sh@21 -- # val= 00:07:33.295 11:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.295 11:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:33.295 11:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:33.295 11:59:40 -- accel/accel.sh@21 -- # val= 00:07:33.295 11:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.295 11:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:33.295 11:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:33.295 11:59:40 -- accel/accel.sh@21 -- # val= 00:07:33.295 11:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.295 11:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:33.295 11:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:33.295 11:59:40 -- accel/accel.sh@21 -- # val=0x1 00:07:33.295 11:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.295 11:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:33.295 11:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:33.295 11:59:40 -- accel/accel.sh@21 -- # val= 00:07:33.295 11:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.295 11:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:33.295 [2024-07-25 11:59:40.375431] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x195b520 PMD being used: 
compress_qat 00:07:33.295 11:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:33.295 11:59:40 -- accel/accel.sh@21 -- # val= 00:07:33.295 11:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.295 11:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:33.295 11:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:33.295 11:59:40 -- accel/accel.sh@21 -- # val=decompress 00:07:33.295 11:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.295 11:59:40 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:33.295 11:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:33.295 11:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:33.295 11:59:40 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:33.295 11:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.295 11:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:33.295 11:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:33.295 11:59:40 -- accel/accel.sh@21 -- # val= 00:07:33.295 11:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.295 11:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:33.295 11:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:33.295 [2024-07-25 11:59:40.377436] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1c88d40 PMD being used: compress_qat 00:07:33.295 11:59:40 -- accel/accel.sh@21 -- # val=dpdk_compressdev 00:07:33.295 11:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.295 11:59:40 -- accel/accel.sh@23 -- # accel_module=dpdk_compressdev 00:07:33.295 11:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:33.295 11:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:33.295 11:59:40 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/bib 00:07:33.296 11:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.296 11:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:33.296 11:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:33.296 11:59:40 -- accel/accel.sh@21 -- # val=32 00:07:33.296 11:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.296 
11:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:33.296 11:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:33.296 11:59:40 -- accel/accel.sh@21 -- # val=32 00:07:33.296 11:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.296 11:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:33.296 11:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:33.296 11:59:40 -- accel/accel.sh@21 -- # val=2 00:07:33.296 11:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.296 11:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:33.296 11:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:33.296 11:59:40 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:33.296 11:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.296 11:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:33.296 11:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:33.296 11:59:40 -- accel/accel.sh@21 -- # val=Yes 00:07:33.296 11:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.296 11:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:33.296 11:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:33.296 11:59:40 -- accel/accel.sh@21 -- # val= 00:07:33.296 11:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.296 11:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:33.296 11:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:33.296 11:59:40 -- accel/accel.sh@21 -- # val= 00:07:33.296 11:59:40 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.296 11:59:40 -- accel/accel.sh@20 -- # IFS=: 00:07:33.296 11:59:40 -- accel/accel.sh@20 -- # read -r var val 00:07:34.673 11:59:41 -- accel/accel.sh@21 -- # val= 00:07:34.673 11:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.673 11:59:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.673 11:59:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.673 11:59:41 -- accel/accel.sh@21 -- # val= 00:07:34.673 11:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.673 11:59:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.673 11:59:41 -- accel/accel.sh@20 -- # read -r var val 
00:07:34.673 11:59:41 -- accel/accel.sh@21 -- # val= 00:07:34.673 11:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.673 11:59:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.673 11:59:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.673 11:59:41 -- accel/accel.sh@21 -- # val= 00:07:34.673 11:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.673 11:59:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.673 11:59:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.673 11:59:41 -- accel/accel.sh@21 -- # val= 00:07:34.673 11:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.673 11:59:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.673 11:59:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.673 11:59:41 -- accel/accel.sh@21 -- # val= 00:07:34.673 11:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.673 11:59:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.673 11:59:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.673 11:59:41 -- accel/accel.sh@21 -- # val= 00:07:34.673 11:59:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.673 11:59:41 -- accel/accel.sh@20 -- # IFS=: 00:07:34.673 11:59:41 -- accel/accel.sh@20 -- # read -r var val 00:07:34.673 11:59:41 -- accel/accel.sh@28 -- # [[ -n dpdk_compressdev ]] 00:07:34.673 11:59:41 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:34.673 11:59:41 -- accel/accel.sh@28 -- # [[ dpdk_compressdev == \d\p\d\k\_\c\o\m\p\r\e\s\s\d\e\v ]] 00:07:34.673 00:07:34.673 real 0m3.848s 00:07:34.673 user 0m2.983s 00:07:34.673 sys 0m0.841s 00:07:34.673 11:59:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.673 11:59:41 -- common/autotest_common.sh@10 -- # set +x 00:07:34.673 ************************************ 00:07:34.673 END TEST accel_cdev_deomp_full_mthread 00:07:34.673 ************************************ 00:07:34.673 11:59:41 -- accel/accel.sh@126 -- # unset COMPRESSDEV 00:07:34.673 11:59:41 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:34.673 11:59:41 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:34.674 11:59:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:34.674 11:59:41 -- common/autotest_common.sh@10 -- # set +x 00:07:34.674 11:59:41 -- accel/accel.sh@129 -- # build_accel_config 00:07:34.674 11:59:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.674 11:59:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.674 11:59:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.674 11:59:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.674 11:59:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.674 11:59:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.674 11:59:41 -- accel/accel.sh@42 -- # jq -r . 00:07:34.674 ************************************ 00:07:34.674 START TEST accel_dif_functional_tests 00:07:34.674 ************************************ 00:07:34.674 11:59:41 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:34.674 [2024-07-25 11:59:41.639828] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:34.674 [2024-07-25 11:59:41.639875] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1198301 ] 00:07:34.674 [2024-07-25 11:59:41.724622] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:34.674 [2024-07-25 11:59:41.809395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.674 [2024-07-25 11:59:41.809483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:34.674 [2024-07-25 11:59:41.809486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.674 00:07:34.674 00:07:34.674 CUnit - A unit testing framework for C - Version 2.1-3 00:07:34.674 http://cunit.sourceforge.net/ 00:07:34.674 00:07:34.674 00:07:34.674 Suite: accel_dif 00:07:34.674 Test: verify: DIF generated, GUARD check ...passed 00:07:34.674 Test: verify: DIF generated, APPTAG check ...passed 00:07:34.674 Test: verify: DIF generated, REFTAG check ...passed 00:07:34.674 Test: verify: DIF not generated, GUARD check ...[2024-07-25 11:59:41.899998] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:34.674 [2024-07-25 11:59:41.900047] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:34.674 passed 00:07:34.674 Test: verify: DIF not generated, APPTAG check ...[2024-07-25 11:59:41.900094] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:34.674 [2024-07-25 11:59:41.900112] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:34.674 passed 00:07:34.674 Test: verify: DIF not generated, REFTAG check ...[2024-07-25 11:59:41.900135] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:34.674 [2024-07-25 11:59:41.900152] dif.c: 
813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:34.674 passed 00:07:34.674 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:34.674 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-25 11:59:41.900196] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:34.674 passed 00:07:34.674 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:34.674 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:34.674 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:34.674 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-25 11:59:41.900311] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:34.674 passed 00:07:34.674 Test: generate copy: DIF generated, GUARD check ...passed 00:07:34.674 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:34.674 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:34.674 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:34.674 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:34.674 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:34.674 Test: generate copy: iovecs-len validate ...[2024-07-25 11:59:41.900489] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:34.674 passed 00:07:34.674 Test: generate copy: buffer alignment validate ...passed 00:07:34.674 00:07:34.674 Run Summary: Type Total Ran Passed Failed Inactive 00:07:34.674 suites 1 1 n/a 0 0 00:07:34.674 tests 20 20 20 0 0 00:07:34.674 asserts 204 204 204 0 n/a 00:07:34.674 00:07:34.674 Elapsed time = 0.002 seconds 00:07:34.933 00:07:34.933 real 0m0.511s 00:07:34.933 user 0m0.750s 00:07:34.933 sys 0m0.183s 00:07:34.933 11:59:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.933 11:59:42 -- common/autotest_common.sh@10 -- # set +x 00:07:34.933 ************************************ 00:07:34.933 END TEST accel_dif_functional_tests 00:07:34.933 ************************************ 00:07:34.933 00:07:34.933 real 1m31.184s 00:07:34.933 user 1m51.112s 00:07:34.933 sys 0m15.010s 00:07:34.933 11:59:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.933 11:59:42 -- common/autotest_common.sh@10 -- # set +x 00:07:34.933 ************************************ 00:07:34.933 END TEST accel 00:07:34.933 ************************************ 00:07:34.933 11:59:42 -- spdk/autotest.sh@190 -- # run_test accel_rpc /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:34.933 11:59:42 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:34.933 11:59:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:34.933 11:59:42 -- common/autotest_common.sh@10 -- # set +x 00:07:34.933 ************************************ 00:07:34.933 START TEST accel_rpc 00:07:34.933 ************************************ 00:07:34.933 11:59:42 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:35.193 * Looking for test storage... 
00:07:35.193 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/accel 00:07:35.193 11:59:42 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:35.193 11:59:42 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1198379 00:07:35.193 11:59:42 -- accel/accel_rpc.sh@15 -- # waitforlisten 1198379 00:07:35.193 11:59:42 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:35.193 11:59:42 -- common/autotest_common.sh@819 -- # '[' -z 1198379 ']' 00:07:35.193 11:59:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:35.193 11:59:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:35.193 11:59:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:35.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:35.193 11:59:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:35.193 11:59:42 -- common/autotest_common.sh@10 -- # set +x 00:07:35.193 [2024-07-25 11:59:42.339294] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:35.193 [2024-07-25 11:59:42.339356] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1198379 ] 00:07:35.193 [2024-07-25 11:59:42.427849] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.452 [2024-07-25 11:59:42.515279] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:35.452 [2024-07-25 11:59:42.515409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.021 11:59:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:36.021 11:59:43 -- common/autotest_common.sh@852 -- # return 0 00:07:36.021 11:59:43 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:36.021 11:59:43 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:36.021 11:59:43 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:36.021 11:59:43 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:36.021 11:59:43 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:36.021 11:59:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:36.021 11:59:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:36.021 11:59:43 -- common/autotest_common.sh@10 -- # set +x 00:07:36.021 ************************************ 00:07:36.021 START TEST accel_assign_opcode 00:07:36.021 ************************************ 00:07:36.021 11:59:43 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:36.021 11:59:43 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:36.021 11:59:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:36.021 11:59:43 -- common/autotest_common.sh@10 -- # set +x 00:07:36.021 [2024-07-25 11:59:43.141288] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:36.021 
11:59:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:36.021 11:59:43 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:36.021 11:59:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:36.022 11:59:43 -- common/autotest_common.sh@10 -- # set +x 00:07:36.022 [2024-07-25 11:59:43.149301] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:36.022 11:59:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:36.022 11:59:43 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:36.022 11:59:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:36.022 11:59:43 -- common/autotest_common.sh@10 -- # set +x 00:07:36.282 11:59:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:36.282 11:59:43 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:36.282 11:59:43 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:36.282 11:59:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:36.282 11:59:43 -- accel/accel_rpc.sh@42 -- # grep software 00:07:36.282 11:59:43 -- common/autotest_common.sh@10 -- # set +x 00:07:36.282 11:59:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:36.282 software 00:07:36.282 00:07:36.282 real 0m0.284s 00:07:36.282 user 0m0.044s 00:07:36.282 sys 0m0.015s 00:07:36.282 11:59:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.282 11:59:43 -- common/autotest_common.sh@10 -- # set +x 00:07:36.282 ************************************ 00:07:36.282 END TEST accel_assign_opcode 00:07:36.282 ************************************ 00:07:36.282 11:59:43 -- accel/accel_rpc.sh@55 -- # killprocess 1198379 00:07:36.282 11:59:43 -- common/autotest_common.sh@926 -- # '[' -z 1198379 ']' 00:07:36.282 11:59:43 -- common/autotest_common.sh@930 -- # kill -0 1198379 00:07:36.282 11:59:43 -- common/autotest_common.sh@931 -- # uname 00:07:36.282 11:59:43 -- common/autotest_common.sh@931 -- # '[' Linux 
= Linux ']' 00:07:36.282 11:59:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1198379 00:07:36.282 11:59:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:36.282 11:59:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:36.282 11:59:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1198379' 00:07:36.282 killing process with pid 1198379 00:07:36.282 11:59:43 -- common/autotest_common.sh@945 -- # kill 1198379 00:07:36.282 11:59:43 -- common/autotest_common.sh@950 -- # wait 1198379 00:07:36.574 00:07:36.574 real 0m1.682s 00:07:36.574 user 0m1.643s 00:07:36.574 sys 0m0.510s 00:07:36.574 11:59:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:36.574 11:59:43 -- common/autotest_common.sh@10 -- # set +x 00:07:36.574 ************************************ 00:07:36.574 END TEST accel_rpc 00:07:36.574 ************************************ 00:07:36.833 11:59:43 -- spdk/autotest.sh@191 -- # run_test app_cmdline /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh 00:07:36.833 11:59:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:36.833 11:59:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:36.833 11:59:43 -- common/autotest_common.sh@10 -- # set +x 00:07:36.833 ************************************ 00:07:36.833 START TEST app_cmdline 00:07:36.833 ************************************ 00:07:36.833 11:59:43 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/cmdline.sh 00:07:36.833 * Looking for test storage... 
00:07:36.833 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:07:36.833 11:59:44 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:36.833 11:59:44 -- app/cmdline.sh@17 -- # spdk_tgt_pid=1198790 00:07:36.833 11:59:44 -- app/cmdline.sh@18 -- # waitforlisten 1198790 00:07:36.833 11:59:44 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:36.833 11:59:44 -- common/autotest_common.sh@819 -- # '[' -z 1198790 ']' 00:07:36.833 11:59:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.833 11:59:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:36.833 11:59:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.833 11:59:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:36.833 11:59:44 -- common/autotest_common.sh@10 -- # set +x 00:07:36.833 [2024-07-25 11:59:44.092717] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:36.833 [2024-07-25 11:59:44.092784] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1198790 ] 00:07:37.092 [2024-07-25 11:59:44.178129] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.092 [2024-07-25 11:59:44.263185] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:37.092 [2024-07-25 11:59:44.263330] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.659 11:59:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:37.659 11:59:44 -- common/autotest_common.sh@852 -- # return 0 00:07:37.659 11:59:44 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:37.918 { 00:07:37.918 "version": "SPDK v24.01.1-pre git sha1 dbef7efac", 00:07:37.918 "fields": { 00:07:37.918 "major": 24, 00:07:37.918 "minor": 1, 00:07:37.918 "patch": 1, 00:07:37.918 "suffix": "-pre", 00:07:37.918 "commit": "dbef7efac" 00:07:37.918 } 00:07:37.918 } 00:07:37.918 11:59:45 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:37.918 11:59:45 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:37.918 11:59:45 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:37.918 11:59:45 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:37.918 11:59:45 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:37.918 11:59:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:37.918 11:59:45 -- common/autotest_common.sh@10 -- # set +x 00:07:37.918 11:59:45 -- app/cmdline.sh@26 -- # sort 00:07:37.918 11:59:45 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:37.918 11:59:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:37.918 11:59:45 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:37.918 
11:59:45 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:37.918 11:59:45 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:37.918 11:59:45 -- common/autotest_common.sh@640 -- # local es=0 00:07:37.918 11:59:45 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:37.918 11:59:45 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:07:37.918 11:59:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:37.918 11:59:45 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:07:37.918 11:59:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:37.918 11:59:45 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:07:37.918 11:59:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:37.918 11:59:45 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:07:37.918 11:59:45 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:07:37.918 11:59:45 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:38.177 request: 00:07:38.177 { 00:07:38.177 "method": "env_dpdk_get_mem_stats", 00:07:38.177 "req_id": 1 00:07:38.177 } 00:07:38.177 Got JSON-RPC error response 00:07:38.177 response: 00:07:38.177 { 00:07:38.177 "code": -32601, 00:07:38.177 "message": "Method not found" 00:07:38.177 } 00:07:38.177 11:59:45 -- common/autotest_common.sh@643 -- # es=1 00:07:38.177 11:59:45 -- common/autotest_common.sh@651 -- # (( es 
> 128 )) 00:07:38.177 11:59:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:38.177 11:59:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:38.177 11:59:45 -- app/cmdline.sh@1 -- # killprocess 1198790 00:07:38.177 11:59:45 -- common/autotest_common.sh@926 -- # '[' -z 1198790 ']' 00:07:38.177 11:59:45 -- common/autotest_common.sh@930 -- # kill -0 1198790 00:07:38.177 11:59:45 -- common/autotest_common.sh@931 -- # uname 00:07:38.177 11:59:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:38.177 11:59:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1198790 00:07:38.177 11:59:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:38.177 11:59:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:38.177 11:59:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1198790' 00:07:38.177 killing process with pid 1198790 00:07:38.177 11:59:45 -- common/autotest_common.sh@945 -- # kill 1198790 00:07:38.177 11:59:45 -- common/autotest_common.sh@950 -- # wait 1198790 00:07:38.436 00:07:38.436 real 0m1.785s 00:07:38.436 user 0m2.010s 00:07:38.436 sys 0m0.537s 00:07:38.436 11:59:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.436 11:59:45 -- common/autotest_common.sh@10 -- # set +x 00:07:38.436 ************************************ 00:07:38.436 END TEST app_cmdline 00:07:38.436 ************************************ 00:07:38.696 11:59:45 -- spdk/autotest.sh@192 -- # run_test version /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh 00:07:38.696 11:59:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:38.696 11:59:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:38.696 11:59:45 -- common/autotest_common.sh@10 -- # set +x 00:07:38.696 ************************************ 00:07:38.696 START TEST version 00:07:38.696 ************************************ 00:07:38.696 11:59:45 -- common/autotest_common.sh@1104 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/version.sh 00:07:38.696 * Looking for test storage... 00:07:38.696 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:07:38.696 11:59:45 -- app/version.sh@17 -- # get_header_version major 00:07:38.696 11:59:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:07:38.696 11:59:45 -- app/version.sh@14 -- # tr -d '"' 00:07:38.696 11:59:45 -- app/version.sh@14 -- # cut -f2 00:07:38.696 11:59:45 -- app/version.sh@17 -- # major=24 00:07:38.696 11:59:45 -- app/version.sh@18 -- # get_header_version minor 00:07:38.696 11:59:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:07:38.696 11:59:45 -- app/version.sh@14 -- # cut -f2 00:07:38.696 11:59:45 -- app/version.sh@14 -- # tr -d '"' 00:07:38.696 11:59:45 -- app/version.sh@18 -- # minor=1 00:07:38.696 11:59:45 -- app/version.sh@19 -- # get_header_version patch 00:07:38.696 11:59:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:07:38.696 11:59:45 -- app/version.sh@14 -- # cut -f2 00:07:38.696 11:59:45 -- app/version.sh@14 -- # tr -d '"' 00:07:38.696 11:59:45 -- app/version.sh@19 -- # patch=1 00:07:38.696 11:59:45 -- app/version.sh@20 -- # get_header_version suffix 00:07:38.696 11:59:45 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/version.h 00:07:38.696 11:59:45 -- app/version.sh@14 -- # cut -f2 00:07:38.696 11:59:45 -- app/version.sh@14 -- # tr -d '"' 00:07:38.696 11:59:45 -- app/version.sh@20 -- # suffix=-pre 00:07:38.696 11:59:45 -- app/version.sh@22 -- # version=24.1 00:07:38.696 11:59:45 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:38.696 
11:59:45 -- app/version.sh@25 -- # version=24.1.1 00:07:38.696 11:59:45 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:38.696 11:59:45 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:07:38.696 11:59:45 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:38.696 11:59:45 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:38.696 11:59:45 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:38.696 00:07:38.696 real 0m0.175s 00:07:38.696 user 0m0.094s 00:07:38.696 sys 0m0.125s 00:07:38.696 11:59:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:38.696 11:59:45 -- common/autotest_common.sh@10 -- # set +x 00:07:38.696 ************************************ 00:07:38.696 END TEST version 00:07:38.696 ************************************ 00:07:38.696 11:59:45 -- spdk/autotest.sh@194 -- # '[' 1 -eq 1 ']' 00:07:38.696 11:59:45 -- spdk/autotest.sh@195 -- # run_test blockdev_general /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh 00:07:38.696 11:59:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:38.696 11:59:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:38.696 11:59:45 -- common/autotest_common.sh@10 -- # set +x 00:07:38.696 ************************************ 00:07:38.696 START TEST blockdev_general 00:07:38.696 ************************************ 00:07:38.696 11:59:45 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/blockdev.sh 00:07:38.955 * Looking for test storage... 
00:07:38.955 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:07:38.955 11:59:46 -- bdev/blockdev.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:38.955 11:59:46 -- bdev/nbd_common.sh@6 -- # set -e 00:07:38.955 11:59:46 -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:07:38.955 11:59:46 -- bdev/blockdev.sh@13 -- # conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:07:38.955 11:59:46 -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json 00:07:38.955 11:59:46 -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json 00:07:38.955 11:59:46 -- bdev/blockdev.sh@18 -- # : 00:07:38.955 11:59:46 -- bdev/blockdev.sh@668 -- # QOS_DEV_1=Malloc_0 00:07:38.955 11:59:46 -- bdev/blockdev.sh@669 -- # QOS_DEV_2=Null_1 00:07:38.955 11:59:46 -- bdev/blockdev.sh@670 -- # QOS_RUN_TIME=5 00:07:38.955 11:59:46 -- bdev/blockdev.sh@672 -- # uname -s 00:07:38.955 11:59:46 -- bdev/blockdev.sh@672 -- # '[' Linux = Linux ']' 00:07:38.955 11:59:46 -- bdev/blockdev.sh@674 -- # PRE_RESERVED_MEM=0 00:07:38.955 11:59:46 -- bdev/blockdev.sh@680 -- # test_type=bdev 00:07:38.955 11:59:46 -- bdev/blockdev.sh@681 -- # crypto_device= 00:07:38.955 11:59:46 -- bdev/blockdev.sh@682 -- # dek= 00:07:38.955 11:59:46 -- bdev/blockdev.sh@683 -- # env_ctx= 00:07:38.955 11:59:46 -- bdev/blockdev.sh@684 -- # wait_for_rpc= 00:07:38.955 11:59:46 -- bdev/blockdev.sh@685 -- # '[' -n '' ']' 00:07:38.955 11:59:46 -- bdev/blockdev.sh@688 -- # [[ bdev == bdev ]] 00:07:38.955 11:59:46 -- bdev/blockdev.sh@689 -- # wait_for_rpc=--wait-for-rpc 00:07:38.955 11:59:46 -- bdev/blockdev.sh@691 -- # start_spdk_tgt 00:07:38.955 11:59:46 -- bdev/blockdev.sh@45 -- # spdk_tgt_pid=1199096 00:07:38.955 11:59:46 -- bdev/blockdev.sh@46 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 
00:07:38.955 11:59:46 -- bdev/blockdev.sh@47 -- # waitforlisten 1199096 00:07:38.955 11:59:46 -- bdev/blockdev.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:07:38.955 11:59:46 -- common/autotest_common.sh@819 -- # '[' -z 1199096 ']' 00:07:38.955 11:59:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.955 11:59:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:38.955 11:59:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:38.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.956 11:59:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:38.956 11:59:46 -- common/autotest_common.sh@10 -- # set +x 00:07:38.956 [2024-07-25 11:59:46.163792] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:07:38.956 [2024-07-25 11:59:46.163855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1199096 ] 00:07:38.956 [2024-07-25 11:59:46.251842] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.214 [2024-07-25 11:59:46.337183] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:39.214 [2024-07-25 11:59:46.337339] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.782 11:59:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:39.782 11:59:46 -- common/autotest_common.sh@852 -- # return 0 00:07:39.782 11:59:46 -- bdev/blockdev.sh@692 -- # case "$test_type" in 00:07:39.782 11:59:46 -- bdev/blockdev.sh@694 -- # setup_bdev_conf 00:07:39.782 11:59:46 -- bdev/blockdev.sh@51 -- # rpc_cmd 00:07:39.782 11:59:46 -- common/autotest_common.sh@551 
-- # xtrace_disable 00:07:39.782 11:59:46 -- common/autotest_common.sh@10 -- # set +x 00:07:40.041 [2024-07-25 11:59:47.184410] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:40.041 [2024-07-25 11:59:47.184457] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:40.041 00:07:40.041 [2024-07-25 11:59:47.192394] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:40.041 [2024-07-25 11:59:47.192410] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:40.041 00:07:40.041 Malloc0 00:07:40.041 Malloc1 00:07:40.041 Malloc2 00:07:40.041 Malloc3 00:07:40.041 Malloc4 00:07:40.041 Malloc5 00:07:40.041 Malloc6 00:07:40.041 Malloc7 00:07:40.041 Malloc8 00:07:40.041 Malloc9 00:07:40.041 [2024-07-25 11:59:47.331029] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:40.041 [2024-07-25 11:59:47.331067] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:40.041 [2024-07-25 11:59:47.331083] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a119c0 00:07:40.041 [2024-07-25 11:59:47.331092] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:40.041 [2024-07-25 11:59:47.332061] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:40.041 [2024-07-25 11:59:47.332084] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:40.041 TestPT 00:07:40.299 11:59:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.299 11:59:47 -- bdev/blockdev.sh@74 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile bs=2048 count=5000 00:07:40.299 5000+0 records in 00:07:40.299 5000+0 records out 00:07:40.299 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0246416 s, 416 MB/s 00:07:40.299 11:59:47 -- bdev/blockdev.sh@75 -- # rpc_cmd 
bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile AIO0 2048 00:07:40.299 11:59:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.299 11:59:47 -- common/autotest_common.sh@10 -- # set +x 00:07:40.299 AIO0 00:07:40.299 11:59:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.299 11:59:47 -- bdev/blockdev.sh@735 -- # rpc_cmd bdev_wait_for_examine 00:07:40.299 11:59:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.299 11:59:47 -- common/autotest_common.sh@10 -- # set +x 00:07:40.299 11:59:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.299 11:59:47 -- bdev/blockdev.sh@738 -- # cat 00:07:40.299 11:59:47 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n accel 00:07:40.299 11:59:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.299 11:59:47 -- common/autotest_common.sh@10 -- # set +x 00:07:40.299 11:59:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.299 11:59:47 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n bdev 00:07:40.299 11:59:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.299 11:59:47 -- common/autotest_common.sh@10 -- # set +x 00:07:40.299 11:59:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.299 11:59:47 -- bdev/blockdev.sh@738 -- # rpc_cmd save_subsystem_config -n iobuf 00:07:40.299 11:59:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.299 11:59:47 -- common/autotest_common.sh@10 -- # set +x 00:07:40.299 11:59:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.299 11:59:47 -- bdev/blockdev.sh@746 -- # mapfile -t bdevs 00:07:40.299 11:59:47 -- bdev/blockdev.sh@746 -- # rpc_cmd bdev_get_bdevs 00:07:40.299 11:59:47 -- bdev/blockdev.sh@746 -- # jq -r '.[] | select(.claimed == false)' 00:07:40.299 11:59:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.299 11:59:47 -- common/autotest_common.sh@10 -- # set +x 00:07:40.559 11:59:47 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.559 11:59:47 -- bdev/blockdev.sh@747 -- # mapfile -t bdevs_name 00:07:40.559 11:59:47 -- bdev/blockdev.sh@747 -- # jq -r .name 00:07:40.560 11:59:47 -- bdev/blockdev.sh@747 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "bb6df1fd-e622-4cd2-956b-65e624ed6208"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "bb6df1fd-e622-4cd2-956b-65e624ed6208",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "9930b069-d32d-5045-9425-1fd88cdf0791"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "9930b069-d32d-5045-9425-1fd88cdf0791",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "14cd1f67-23b6-52cc-86d5-236d242c489b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": 
"14cd1f67-23b6-52cc-86d5-236d242c489b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "fde8c720-c24a-5c5f-bfae-808407523f7e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "fde8c720-c24a-5c5f-bfae-808407523f7e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "a9381b9d-9ff7-5a93-aeb9-6bfe264e5628"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a9381b9d-9ff7-5a93-aeb9-6bfe264e5628",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' 
},' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "2505d0fc-aa4e-5cae-b1cf-f4d2152f4b50"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2505d0fc-aa4e-5cae-b1cf-f4d2152f4b50",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "6493a58f-cda7-556d-ad3f-93218b47db12"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6493a58f-cda7-556d-ad3f-93218b47db12",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "7b75ec33-db35-521f-be99-0960aa483abb"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7b75ec33-db35-521f-be99-0960aa483abb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' 
"claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "02d12f55-28b3-5ac1-b2c2-4272735a3a65"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "02d12f55-28b3-5ac1-b2c2-4272735a3a65",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "a46813e8-c647-54ca-a5b1-cb3e8db17eaa"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a46813e8-c647-54ca-a5b1-cb3e8db17eaa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' 
"4243129a-6433-54d9-828b-15134ca43904"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4243129a-6433-54d9-828b-15134ca43904",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "9c5706c6-0511-51c8-9cbc-f30f7b9baaca"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "9c5706c6-0511-51c8-9cbc-f30f7b9baaca",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "33d8955f-cf34-40f7-910d-0c7399dc3a00"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "33d8955f-cf34-40f7-910d-0c7399dc3a00",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "33d8955f-cf34-40f7-910d-0c7399dc3a00",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "8dc5ae78-0c30-442d-bfa7-722af2dd7886",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "b7d93b0a-2932-44bc-9c13-f83def8b8034",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "62f5f36c-518e-48fc-a20e-8acb7fe4b9ed"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "62f5f36c-518e-48fc-a20e-8acb7fe4b9ed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": 
"62f5f36c-518e-48fc-a20e-8acb7fe4b9ed",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "587b6824-56dc-4cf5-830c-4a785dcce732",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "83dcc151-a468-48b6-8fda-05491b265630",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "ce70b93d-f655-49de-9b59-4d053ef2dfe6"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ce70b93d-f655-49de-9b59-4d053ef2dfe6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "ce70b93d-f655-49de-9b59-4d053ef2dfe6",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "22a3ea40-eb54-440b-b591-8871597df5a4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "5f6e0c7a-d3b6-4745-af20-a5acdee3a0b8",' ' "is_configured": true,' ' 
"data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "87095e70-a15d-4787-9513-dbfcd77568e7"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "87095e70-a15d-4787-9513-dbfcd77568e7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:07:40.560 11:59:47 -- bdev/blockdev.sh@748 -- # bdev_list=("${bdevs_name[@]}") 00:07:40.560 11:59:47 -- bdev/blockdev.sh@750 -- # hello_world_bdev=Malloc0 00:07:40.560 11:59:47 -- bdev/blockdev.sh@751 -- # trap - SIGINT SIGTERM EXIT 00:07:40.560 11:59:47 -- bdev/blockdev.sh@752 -- # killprocess 1199096 00:07:40.560 11:59:47 -- common/autotest_common.sh@926 -- # '[' -z 1199096 ']' 00:07:40.560 11:59:47 -- common/autotest_common.sh@930 -- # kill -0 1199096 00:07:40.560 11:59:47 -- common/autotest_common.sh@931 -- # uname 00:07:40.560 11:59:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:40.560 11:59:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1199096 00:07:40.560 11:59:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:40.560 11:59:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:40.560 11:59:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1199096' 00:07:40.560 killing process with pid 1199096 00:07:40.560 11:59:47 -- common/autotest_common.sh@945 -- # kill 1199096 
00:07:40.560 11:59:47 -- common/autotest_common.sh@950 -- # wait 1199096 00:07:41.128 11:59:48 -- bdev/blockdev.sh@756 -- # trap cleanup SIGINT SIGTERM EXIT 00:07:41.128 11:59:48 -- bdev/blockdev.sh@758 -- # run_test bdev_hello_world /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:07:41.128 11:59:48 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:41.128 11:59:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:41.128 11:59:48 -- common/autotest_common.sh@10 -- # set +x 00:07:41.128 ************************************ 00:07:41.128 START TEST bdev_hello_world 00:07:41.128 ************************************ 00:07:41.128 11:59:48 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/hello_bdev --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -b Malloc0 '' 00:07:41.128 [2024-07-25 11:59:48.288967] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:41.128 [2024-07-25 11:59:48.289015] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1199467 ] 00:07:41.128 [2024-07-25 11:59:48.375646] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.387 [2024-07-25 11:59:48.460738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.387 [2024-07-25 11:59:48.604739] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:41.387 [2024-07-25 11:59:48.604779] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:41.387 [2024-07-25 11:59:48.604789] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:41.387 [2024-07-25 11:59:48.612750] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:41.387 [2024-07-25 11:59:48.612767] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:41.387 [2024-07-25 11:59:48.620761] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:41.387 [2024-07-25 11:59:48.620776] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:41.387 [2024-07-25 11:59:48.690746] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:41.387 [2024-07-25 11:59:48.690788] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:41.387 [2024-07-25 11:59:48.690800] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c218c0 00:07:41.387 [2024-07-25 11:59:48.690809] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:41.387 [2024-07-25 11:59:48.691856] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:07:41.387 [2024-07-25 11:59:48.691878] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:41.646 [2024-07-25 11:59:48.824983] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:07:41.646 [2024-07-25 11:59:48.825022] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:07:41.646 [2024-07-25 11:59:48.825048] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:07:41.646 [2024-07-25 11:59:48.825081] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:07:41.646 [2024-07-25 11:59:48.825120] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:07:41.646 [2024-07-25 11:59:48.825138] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:07:41.646 [2024-07-25 11:59:48.825166] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 00:07:41.646 00:07:41.646 [2024-07-25 11:59:48.825184] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:07:41.905 00:07:41.905 real 0m0.913s 00:07:41.905 user 0m0.627s 00:07:41.905 sys 0m0.255s 00:07:41.905 11:59:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.905 11:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:41.905 ************************************ 00:07:41.905 END TEST bdev_hello_world 00:07:41.905 ************************************ 00:07:41.905 11:59:49 -- bdev/blockdev.sh@759 -- # run_test bdev_bounds bdev_bounds '' 00:07:41.905 11:59:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:41.905 11:59:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:41.905 11:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:41.905 ************************************ 00:07:41.905 START TEST bdev_bounds 00:07:41.905 ************************************ 00:07:41.905 11:59:49 -- common/autotest_common.sh@1104 -- # bdev_bounds '' 00:07:41.905 11:59:49 -- bdev/blockdev.sh@288 -- # bdevio_pid=1199611 00:07:41.905 11:59:49 -- 
bdev/blockdev.sh@289 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:07:41.905 11:59:49 -- bdev/blockdev.sh@290 -- # echo 'Process bdevio pid: 1199611' 00:07:41.905 Process bdevio pid: 1199611 00:07:41.905 11:59:49 -- bdev/blockdev.sh@291 -- # waitforlisten 1199611 00:07:41.905 11:59:49 -- common/autotest_common.sh@819 -- # '[' -z 1199611 ']' 00:07:41.905 11:59:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.905 11:59:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:41.905 11:59:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.905 11:59:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:41.905 11:59:49 -- common/autotest_common.sh@10 -- # set +x 00:07:41.905 11:59:49 -- bdev/blockdev.sh@287 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:07:42.164 [2024-07-25 11:59:49.243544] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:42.164 [2024-07-25 11:59:49.243594] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1199611 ] 00:07:42.164 [2024-07-25 11:59:49.330493] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:42.164 [2024-07-25 11:59:49.421643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:42.164 [2024-07-25 11:59:49.421732] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:42.164 [2024-07-25 11:59:49.421734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.423 [2024-07-25 11:59:49.572619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:42.423 [2024-07-25 11:59:49.572666] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:42.423 [2024-07-25 11:59:49.572692] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:42.423 [2024-07-25 11:59:49.580635] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:42.423 [2024-07-25 11:59:49.580653] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:42.423 [2024-07-25 11:59:49.588648] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:42.423 [2024-07-25 11:59:49.588664] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:42.423 [2024-07-25 11:59:49.662158] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:42.423 [2024-07-25 11:59:49.662199] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:42.423 [2024-07-25 11:59:49.662227] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x26841d0 00:07:42.423 [2024-07-25 
11:59:49.662236] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:42.423 [2024-07-25 11:59:49.663384] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:42.423 [2024-07-25 11:59:49.663406] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:42.991 11:59:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:42.991 11:59:50 -- common/autotest_common.sh@852 -- # return 0 00:07:42.991 11:59:50 -- bdev/blockdev.sh@292 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:07:42.991 I/O targets: 00:07:42.991 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:07:42.991 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:07:42.991 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:07:42.991 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:07:42.991 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:07:42.991 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:07:42.991 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:07:42.991 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:07:42.991 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:07:42.991 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:07:42.991 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:07:42.991 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:07:42.991 raid0: 131072 blocks of 512 bytes (64 MiB) 00:07:42.991 concat0: 131072 blocks of 512 bytes (64 MiB) 00:07:42.991 raid1: 65536 blocks of 512 bytes (32 MiB) 00:07:42.991 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:07:42.991 00:07:42.991 00:07:42.991 CUnit - A unit testing framework for C - Version 2.1-3 00:07:42.991 http://cunit.sourceforge.net/ 00:07:42.991 00:07:42.991 00:07:42.991 Suite: bdevio tests on: AIO0 00:07:42.991 Test: blockdev write read block ...passed 00:07:42.992 Test: blockdev write zeroes read block ...passed 00:07:42.992 Test: blockdev write zeroes read no split ...passed 00:07:42.992 Test: blockdev write zeroes read split ...passed 
00:07:42.992 Test: blockdev write zeroes read split partial ...passed 00:07:42.992 Test: blockdev reset ...passed 00:07:42.992 Test: blockdev write read 8 blocks ...passed 00:07:42.992 Test: blockdev write read size > 128k ...passed 00:07:42.992 Test: blockdev write read invalid size ...passed 00:07:42.992 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.992 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.992 Test: blockdev write read max offset ...passed 00:07:42.992 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.992 Test: blockdev writev readv 8 blocks ...passed 00:07:42.992 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.992 Test: blockdev writev readv block ...passed 00:07:42.992 Test: blockdev writev readv size > 128k ...passed 00:07:42.992 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.992 Test: blockdev comparev and writev ...passed 00:07:42.992 Test: blockdev nvme passthru rw ...passed 00:07:42.992 Test: blockdev nvme passthru vendor specific ...passed 00:07:42.992 Test: blockdev nvme admin passthru ...passed 00:07:42.992 Test: blockdev copy ...passed 00:07:42.992 Suite: bdevio tests on: raid1 00:07:42.992 Test: blockdev write read block ...passed 00:07:42.992 Test: blockdev write zeroes read block ...passed 00:07:42.992 Test: blockdev write zeroes read no split ...passed 00:07:42.992 Test: blockdev write zeroes read split ...passed 00:07:42.992 Test: blockdev write zeroes read split partial ...passed 00:07:42.992 Test: blockdev reset ...passed 00:07:42.992 Test: blockdev write read 8 blocks ...passed 00:07:42.992 Test: blockdev write read size > 128k ...passed 00:07:42.992 Test: blockdev write read invalid size ...passed 00:07:42.992 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.992 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.992 Test: blockdev write 
read max offset ...passed 00:07:42.992 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.992 Test: blockdev writev readv 8 blocks ...passed 00:07:42.992 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.992 Test: blockdev writev readv block ...passed 00:07:42.992 Test: blockdev writev readv size > 128k ...passed 00:07:42.992 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.992 Test: blockdev comparev and writev ...passed 00:07:42.992 Test: blockdev nvme passthru rw ...passed 00:07:42.992 Test: blockdev nvme passthru vendor specific ...passed 00:07:42.992 Test: blockdev nvme admin passthru ...passed 00:07:42.992 Test: blockdev copy ...passed 00:07:42.992 Suite: bdevio tests on: concat0 00:07:42.992 Test: blockdev write read block ...passed 00:07:42.992 Test: blockdev write zeroes read block ...passed 00:07:42.992 Test: blockdev write zeroes read no split ...passed 00:07:42.992 Test: blockdev write zeroes read split ...passed 00:07:42.992 Test: blockdev write zeroes read split partial ...passed 00:07:42.992 Test: blockdev reset ...passed 00:07:42.992 Test: blockdev write read 8 blocks ...passed 00:07:42.992 Test: blockdev write read size > 128k ...passed 00:07:42.992 Test: blockdev write read invalid size ...passed 00:07:42.992 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.992 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.992 Test: blockdev write read max offset ...passed 00:07:42.992 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.992 Test: blockdev writev readv 8 blocks ...passed 00:07:42.992 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.992 Test: blockdev writev readv block ...passed 00:07:42.992 Test: blockdev writev readv size > 128k ...passed 00:07:42.992 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.992 Test: blockdev comparev and writev ...passed 
00:07:42.992 Test: blockdev nvme passthru rw ...passed 00:07:42.992 Test: blockdev nvme passthru vendor specific ...passed 00:07:42.992 Test: blockdev nvme admin passthru ...passed 00:07:42.992 Test: blockdev copy ...passed 00:07:42.992 Suite: bdevio tests on: raid0 00:07:42.992 Test: blockdev write read block ...passed 00:07:42.992 Test: blockdev write zeroes read block ...passed 00:07:42.992 Test: blockdev write zeroes read no split ...passed 00:07:42.992 Test: blockdev write zeroes read split ...passed 00:07:42.992 Test: blockdev write zeroes read split partial ...passed 00:07:42.992 Test: blockdev reset ...passed 00:07:42.992 Test: blockdev write read 8 blocks ...passed 00:07:42.992 Test: blockdev write read size > 128k ...passed 00:07:42.992 Test: blockdev write read invalid size ...passed 00:07:42.992 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.992 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.992 Test: blockdev write read max offset ...passed 00:07:42.992 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.992 Test: blockdev writev readv 8 blocks ...passed 00:07:42.992 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.992 Test: blockdev writev readv block ...passed 00:07:42.992 Test: blockdev writev readv size > 128k ...passed 00:07:42.992 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.992 Test: blockdev comparev and writev ...passed 00:07:42.992 Test: blockdev nvme passthru rw ...passed 00:07:42.992 Test: blockdev nvme passthru vendor specific ...passed 00:07:42.992 Test: blockdev nvme admin passthru ...passed 00:07:42.992 Test: blockdev copy ...passed 00:07:42.992 Suite: bdevio tests on: TestPT 00:07:42.992 Test: blockdev write read block ...passed 00:07:42.992 Test: blockdev write zeroes read block ...passed 00:07:42.992 Test: blockdev write zeroes read no split ...passed 00:07:42.992 Test: blockdev write zeroes read split 
...passed 00:07:42.992 Test: blockdev write zeroes read split partial ...passed 00:07:42.992 Test: blockdev reset ...passed 00:07:42.992 Test: blockdev write read 8 blocks ...passed 00:07:42.992 Test: blockdev write read size > 128k ...passed 00:07:42.992 Test: blockdev write read invalid size ...passed 00:07:42.992 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.992 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.992 Test: blockdev write read max offset ...passed 00:07:42.992 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.992 Test: blockdev writev readv 8 blocks ...passed 00:07:42.992 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.992 Test: blockdev writev readv block ...passed 00:07:42.992 Test: blockdev writev readv size > 128k ...passed 00:07:42.992 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.992 Test: blockdev comparev and writev ...passed 00:07:42.992 Test: blockdev nvme passthru rw ...passed 00:07:42.992 Test: blockdev nvme passthru vendor specific ...passed 00:07:42.992 Test: blockdev nvme admin passthru ...passed 00:07:42.992 Test: blockdev copy ...passed 00:07:42.992 Suite: bdevio tests on: Malloc2p7 00:07:42.992 Test: blockdev write read block ...passed 00:07:42.992 Test: blockdev write zeroes read block ...passed 00:07:42.992 Test: blockdev write zeroes read no split ...passed 00:07:42.992 Test: blockdev write zeroes read split ...passed 00:07:42.992 Test: blockdev write zeroes read split partial ...passed 00:07:42.992 Test: blockdev reset ...passed 00:07:42.992 Test: blockdev write read 8 blocks ...passed 00:07:42.992 Test: blockdev write read size > 128k ...passed 00:07:42.992 Test: blockdev write read invalid size ...passed 00:07:42.992 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.992 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.992 Test: 
blockdev write read max offset ...passed 00:07:42.992 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.992 Test: blockdev writev readv 8 blocks ...passed 00:07:42.992 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.992 Test: blockdev writev readv block ...passed 00:07:42.992 Test: blockdev writev readv size > 128k ...passed 00:07:42.992 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.992 Test: blockdev comparev and writev ...passed 00:07:42.992 Test: blockdev nvme passthru rw ...passed 00:07:42.992 Test: blockdev nvme passthru vendor specific ...passed 00:07:42.992 Test: blockdev nvme admin passthru ...passed 00:07:42.992 Test: blockdev copy ...passed 00:07:42.992 Suite: bdevio tests on: Malloc2p6 00:07:42.992 Test: blockdev write read block ...passed 00:07:42.992 Test: blockdev write zeroes read block ...passed 00:07:42.992 Test: blockdev write zeroes read no split ...passed 00:07:42.992 Test: blockdev write zeroes read split ...passed 00:07:42.992 Test: blockdev write zeroes read split partial ...passed 00:07:42.992 Test: blockdev reset ...passed 00:07:42.992 Test: blockdev write read 8 blocks ...passed 00:07:42.992 Test: blockdev write read size > 128k ...passed 00:07:42.992 Test: blockdev write read invalid size ...passed 00:07:42.992 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.992 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.992 Test: blockdev write read max offset ...passed 00:07:42.992 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.992 Test: blockdev writev readv 8 blocks ...passed 00:07:42.992 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.992 Test: blockdev writev readv block ...passed 00:07:42.992 Test: blockdev writev readv size > 128k ...passed 00:07:42.992 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.993 Test: blockdev comparev and writev 
...passed 00:07:42.993 Test: blockdev nvme passthru rw ...passed 00:07:42.993 Test: blockdev nvme passthru vendor specific ...passed 00:07:42.993 Test: blockdev nvme admin passthru ...passed 00:07:42.993 Test: blockdev copy ...passed 00:07:42.993 Suite: bdevio tests on: Malloc2p5 00:07:42.993 Test: blockdev write read block ...passed 00:07:42.993 Test: blockdev write zeroes read block ...passed 00:07:42.993 Test: blockdev write zeroes read no split ...passed 00:07:42.993 Test: blockdev write zeroes read split ...passed 00:07:42.993 Test: blockdev write zeroes read split partial ...passed 00:07:42.993 Test: blockdev reset ...passed 00:07:42.993 Test: blockdev write read 8 blocks ...passed 00:07:42.993 Test: blockdev write read size > 128k ...passed 00:07:42.993 Test: blockdev write read invalid size ...passed 00:07:42.993 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.993 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.993 Test: blockdev write read max offset ...passed 00:07:42.993 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.993 Test: blockdev writev readv 8 blocks ...passed 00:07:42.993 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.993 Test: blockdev writev readv block ...passed 00:07:42.993 Test: blockdev writev readv size > 128k ...passed 00:07:42.993 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.993 Test: blockdev comparev and writev ...passed 00:07:42.993 Test: blockdev nvme passthru rw ...passed 00:07:42.993 Test: blockdev nvme passthru vendor specific ...passed 00:07:42.993 Test: blockdev nvme admin passthru ...passed 00:07:42.993 Test: blockdev copy ...passed 00:07:42.993 Suite: bdevio tests on: Malloc2p4 00:07:42.993 Test: blockdev write read block ...passed 00:07:42.993 Test: blockdev write zeroes read block ...passed 00:07:42.993 Test: blockdev write zeroes read no split ...passed 00:07:42.993 Test: blockdev write 
zeroes read split ...passed 00:07:42.993 Test: blockdev write zeroes read split partial ...passed 00:07:42.993 Test: blockdev reset ...passed 00:07:42.993 Test: blockdev write read 8 blocks ...passed 00:07:42.993 Test: blockdev write read size > 128k ...passed 00:07:42.993 Test: blockdev write read invalid size ...passed 00:07:42.993 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.993 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.993 Test: blockdev write read max offset ...passed 00:07:42.993 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.993 Test: blockdev writev readv 8 blocks ...passed 00:07:42.993 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.993 Test: blockdev writev readv block ...passed 00:07:42.993 Test: blockdev writev readv size > 128k ...passed 00:07:42.993 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.993 Test: blockdev comparev and writev ...passed 00:07:42.993 Test: blockdev nvme passthru rw ...passed 00:07:42.993 Test: blockdev nvme passthru vendor specific ...passed 00:07:42.993 Test: blockdev nvme admin passthru ...passed 00:07:42.993 Test: blockdev copy ...passed 00:07:42.993 Suite: bdevio tests on: Malloc2p3 00:07:42.993 Test: blockdev write read block ...passed 00:07:42.993 Test: blockdev write zeroes read block ...passed 00:07:42.993 Test: blockdev write zeroes read no split ...passed 00:07:42.993 Test: blockdev write zeroes read split ...passed 00:07:42.993 Test: blockdev write zeroes read split partial ...passed 00:07:42.993 Test: blockdev reset ...passed 00:07:42.993 Test: blockdev write read 8 blocks ...passed 00:07:42.993 Test: blockdev write read size > 128k ...passed 00:07:42.993 Test: blockdev write read invalid size ...passed 00:07:42.993 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.993 Test: blockdev write read offset + nbytes > size of blockdev ...passed 
00:07:42.993 Test: blockdev write read max offset ...passed 00:07:42.993 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.993 Test: blockdev writev readv 8 blocks ...passed 00:07:42.993 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.993 Test: blockdev writev readv block ...passed 00:07:42.993 Test: blockdev writev readv size > 128k ...passed 00:07:42.993 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.993 Test: blockdev comparev and writev ...passed 00:07:42.993 Test: blockdev nvme passthru rw ...passed 00:07:42.993 Test: blockdev nvme passthru vendor specific ...passed 00:07:42.993 Test: blockdev nvme admin passthru ...passed 00:07:42.993 Test: blockdev copy ...passed 00:07:42.993 Suite: bdevio tests on: Malloc2p2 00:07:42.993 Test: blockdev write read block ...passed 00:07:42.993 Test: blockdev write zeroes read block ...passed 00:07:42.993 Test: blockdev write zeroes read no split ...passed 00:07:42.993 Test: blockdev write zeroes read split ...passed 00:07:42.993 Test: blockdev write zeroes read split partial ...passed 00:07:42.993 Test: blockdev reset ...passed 00:07:42.993 Test: blockdev write read 8 blocks ...passed 00:07:42.993 Test: blockdev write read size > 128k ...passed 00:07:42.993 Test: blockdev write read invalid size ...passed 00:07:42.993 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:42.993 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:42.993 Test: blockdev write read max offset ...passed 00:07:42.993 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:42.993 Test: blockdev writev readv 8 blocks ...passed 00:07:42.993 Test: blockdev writev readv 30 x 1block ...passed 00:07:42.993 Test: blockdev writev readv block ...passed 00:07:42.993 Test: blockdev writev readv size > 128k ...passed 00:07:42.993 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:42.993 Test: blockdev 
comparev and writev ...passed 00:07:42.993 Test: blockdev nvme passthru rw ...passed 00:07:42.993 Test: blockdev nvme passthru vendor specific ...passed 00:07:42.993 Test: blockdev nvme admin passthru ...passed 00:07:42.993 Test: blockdev copy ...passed 00:07:42.993 Suite: bdevio tests on: Malloc2p1 00:07:42.993 Test: blockdev write read block ...passed 00:07:42.993 Test: blockdev write zeroes read block ...passed 00:07:42.993 Test: blockdev write zeroes read no split ...passed 00:07:43.252 Test: blockdev write zeroes read split ...passed 00:07:43.252 Test: blockdev write zeroes read split partial ...passed 00:07:43.252 Test: blockdev reset ...passed 00:07:43.252 Test: blockdev write read 8 blocks ...passed 00:07:43.252 Test: blockdev write read size > 128k ...passed 00:07:43.252 Test: blockdev write read invalid size ...passed 00:07:43.252 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.252 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.252 Test: blockdev write read max offset ...passed 00:07:43.252 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.252 Test: blockdev writev readv 8 blocks ...passed 00:07:43.252 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.252 Test: blockdev writev readv block ...passed 00:07:43.252 Test: blockdev writev readv size > 128k ...passed 00:07:43.252 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.252 Test: blockdev comparev and writev ...passed 00:07:43.252 Test: blockdev nvme passthru rw ...passed 00:07:43.252 Test: blockdev nvme passthru vendor specific ...passed 00:07:43.252 Test: blockdev nvme admin passthru ...passed 00:07:43.252 Test: blockdev copy ...passed 00:07:43.252 Suite: bdevio tests on: Malloc2p0 00:07:43.252 Test: blockdev write read block ...passed 00:07:43.252 Test: blockdev write zeroes read block ...passed 00:07:43.252 Test: blockdev write zeroes read no split ...passed 00:07:43.252 
Test: blockdev write zeroes read split ...passed 00:07:43.252 Test: blockdev write zeroes read split partial ...passed 00:07:43.252 Test: blockdev reset ...passed 00:07:43.252 Test: blockdev write read 8 blocks ...passed 00:07:43.252 Test: blockdev write read size > 128k ...passed 00:07:43.252 Test: blockdev write read invalid size ...passed 00:07:43.252 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.252 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.252 Test: blockdev write read max offset ...passed 00:07:43.252 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.252 Test: blockdev writev readv 8 blocks ...passed 00:07:43.252 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.252 Test: blockdev writev readv block ...passed 00:07:43.252 Test: blockdev writev readv size > 128k ...passed 00:07:43.252 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.252 Test: blockdev comparev and writev ...passed 00:07:43.252 Test: blockdev nvme passthru rw ...passed 00:07:43.252 Test: blockdev nvme passthru vendor specific ...passed 00:07:43.252 Test: blockdev nvme admin passthru ...passed 00:07:43.252 Test: blockdev copy ...passed 00:07:43.252 Suite: bdevio tests on: Malloc1p1 00:07:43.252 Test: blockdev write read block ...passed 00:07:43.252 Test: blockdev write zeroes read block ...passed 00:07:43.252 Test: blockdev write zeroes read no split ...passed 00:07:43.252 Test: blockdev write zeroes read split ...passed 00:07:43.252 Test: blockdev write zeroes read split partial ...passed 00:07:43.252 Test: blockdev reset ...passed 00:07:43.252 Test: blockdev write read 8 blocks ...passed 00:07:43.252 Test: blockdev write read size > 128k ...passed 00:07:43.252 Test: blockdev write read invalid size ...passed 00:07:43.252 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.252 Test: blockdev write read offset + nbytes > size of 
blockdev ...passed 00:07:43.252 Test: blockdev write read max offset ...passed 00:07:43.252 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.252 Test: blockdev writev readv 8 blocks ...passed 00:07:43.252 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.252 Test: blockdev writev readv block ...passed 00:07:43.252 Test: blockdev writev readv size > 128k ...passed 00:07:43.252 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.252 Test: blockdev comparev and writev ...passed 00:07:43.252 Test: blockdev nvme passthru rw ...passed 00:07:43.252 Test: blockdev nvme passthru vendor specific ...passed 00:07:43.252 Test: blockdev nvme admin passthru ...passed 00:07:43.252 Test: blockdev copy ...passed 00:07:43.252 Suite: bdevio tests on: Malloc1p0 00:07:43.252 Test: blockdev write read block ...passed 00:07:43.252 Test: blockdev write zeroes read block ...passed 00:07:43.252 Test: blockdev write zeroes read no split ...passed 00:07:43.252 Test: blockdev write zeroes read split ...passed 00:07:43.252 Test: blockdev write zeroes read split partial ...passed 00:07:43.252 Test: blockdev reset ...passed 00:07:43.252 Test: blockdev write read 8 blocks ...passed 00:07:43.252 Test: blockdev write read size > 128k ...passed 00:07:43.252 Test: blockdev write read invalid size ...passed 00:07:43.252 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.252 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.252 Test: blockdev write read max offset ...passed 00:07:43.252 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.252 Test: blockdev writev readv 8 blocks ...passed 00:07:43.252 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.252 Test: blockdev writev readv block ...passed 00:07:43.252 Test: blockdev writev readv size > 128k ...passed 00:07:43.252 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.252 
Test: blockdev comparev and writev ...passed 00:07:43.252 Test: blockdev nvme passthru rw ...passed 00:07:43.252 Test: blockdev nvme passthru vendor specific ...passed 00:07:43.252 Test: blockdev nvme admin passthru ...passed 00:07:43.252 Test: blockdev copy ...passed 00:07:43.252 Suite: bdevio tests on: Malloc0 00:07:43.252 Test: blockdev write read block ...passed 00:07:43.252 Test: blockdev write zeroes read block ...passed 00:07:43.252 Test: blockdev write zeroes read no split ...passed 00:07:43.252 Test: blockdev write zeroes read split ...passed 00:07:43.252 Test: blockdev write zeroes read split partial ...passed 00:07:43.252 Test: blockdev reset ...passed 00:07:43.252 Test: blockdev write read 8 blocks ...passed 00:07:43.252 Test: blockdev write read size > 128k ...passed 00:07:43.252 Test: blockdev write read invalid size ...passed 00:07:43.252 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:07:43.252 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:07:43.252 Test: blockdev write read max offset ...passed 00:07:43.252 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:07:43.252 Test: blockdev writev readv 8 blocks ...passed 00:07:43.252 Test: blockdev writev readv 30 x 1block ...passed 00:07:43.252 Test: blockdev writev readv block ...passed 00:07:43.252 Test: blockdev writev readv size > 128k ...passed 00:07:43.252 Test: blockdev writev readv size > 128k in two iovs ...passed 00:07:43.253 Test: blockdev comparev and writev ...passed 00:07:43.253 Test: blockdev nvme passthru rw ...passed 00:07:43.253 Test: blockdev nvme passthru vendor specific ...passed 00:07:43.253 Test: blockdev nvme admin passthru ...passed 00:07:43.253 Test: blockdev copy ...passed 00:07:43.253
00:07:43.253 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:07:43.253               suites     16     16    n/a      0        0
00:07:43.253                tests    368    368    368      0        0
00:07:43.253              asserts   2224   2224   2224      0      n/a
00:07:43.253
00:07:43.253 Elapsed time = 0.494 seconds
00:07:43.253 0
00:07:43.253 11:59:50 -- bdev/blockdev.sh@293 -- # killprocess 1199611 00:07:43.253 11:59:50 -- common/autotest_common.sh@926 -- # '[' -z 1199611 ']' 00:07:43.253 11:59:50 -- common/autotest_common.sh@930 -- # kill -0 1199611 00:07:43.253 11:59:50 -- common/autotest_common.sh@931 -- # uname 00:07:43.253 11:59:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:43.253 11:59:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1199611 00:07:43.253 11:59:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:43.253 11:59:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:43.253 11:59:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1199611' 00:07:43.253 killing process with pid 1199611 00:07:43.253 11:59:50 -- common/autotest_common.sh@945 -- # kill 1199611 00:07:43.253 11:59:50 -- common/autotest_common.sh@950 -- # wait 1199611 00:07:43.511 11:59:50 -- bdev/blockdev.sh@294 -- # trap - SIGINT SIGTERM EXIT 00:07:43.511
00:07:43.511 real 0m1.551s
00:07:43.511 user 0m3.791s
00:07:43.511 sys 0m0.432s
00:07:43.511 11:59:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:43.511 11:59:50 -- common/autotest_common.sh@10 -- # set +x 00:07:43.511 ************************************ 00:07:43.511 END TEST bdev_bounds 00:07:43.511 ************************************ 00:07:43.511 11:59:50 -- bdev/blockdev.sh@760 -- # run_test bdev_nbd nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:07:43.511 11:59:50 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:07:43.511 11:59:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.511 11:59:50 -- common/autotest_common.sh@10 -- # set +x 00:07:43.511 ************************************ 00:07:43.511 START TEST
bdev_nbd 00:07:43.511 ************************************ 00:07:43.511 11:59:50 -- common/autotest_common.sh@1104 -- # nbd_function_test /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '' 00:07:43.511 11:59:50 -- bdev/blockdev.sh@298 -- # uname -s 00:07:43.511 11:59:50 -- bdev/blockdev.sh@298 -- # [[ Linux == Linux ]] 00:07:43.511 11:59:50 -- bdev/blockdev.sh@300 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:43.511 11:59:50 -- bdev/blockdev.sh@301 -- # local conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:07:43.511 11:59:50 -- bdev/blockdev.sh@302 -- # bdev_all=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:43.511 11:59:50 -- bdev/blockdev.sh@302 -- # local bdev_all 00:07:43.511 11:59:50 -- bdev/blockdev.sh@303 -- # local bdev_num=16 00:07:43.511 11:59:50 -- bdev/blockdev.sh@307 -- # [[ -e /sys/module/nbd ]] 00:07:43.511 11:59:50 -- bdev/blockdev.sh@309 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:43.511 11:59:50 -- bdev/blockdev.sh@309 -- # local nbd_all 00:07:43.511 11:59:50 -- bdev/blockdev.sh@310 -- # bdev_num=16 00:07:43.511 11:59:50 -- bdev/blockdev.sh@312 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:43.511 11:59:50 -- bdev/blockdev.sh@312 -- # local nbd_list 00:07:43.511 11:59:50 -- bdev/blockdev.sh@313 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 
'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:43.511 11:59:50 -- bdev/blockdev.sh@313 -- # local bdev_list 00:07:43.511 11:59:50 -- bdev/blockdev.sh@316 -- # nbd_pid=1199869 00:07:43.511 11:59:50 -- bdev/blockdev.sh@317 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:07:43.511 11:59:50 -- bdev/blockdev.sh@315 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json '' 00:07:43.511 11:59:50 -- bdev/blockdev.sh@318 -- # waitforlisten 1199869 /var/tmp/spdk-nbd.sock 00:07:43.511 11:59:50 -- common/autotest_common.sh@819 -- # '[' -z 1199869 ']' 00:07:43.511 11:59:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:43.511 11:59:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:43.511 11:59:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:43.511 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:43.511 11:59:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:43.511 11:59:50 -- common/autotest_common.sh@10 -- # set +x 00:07:43.769 [2024-07-25 11:59:50.865739] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:07:43.769 [2024-07-25 11:59:50.865801] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.769 [2024-07-25 11:59:50.955276] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.769 [2024-07-25 11:59:51.038898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.028 [2024-07-25 11:59:51.179151] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:44.028 [2024-07-25 11:59:51.179199] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:07:44.028 [2024-07-25 11:59:51.179224] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:07:44.028 [2024-07-25 11:59:51.187160] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:44.028 [2024-07-25 11:59:51.187178] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:07:44.028 [2024-07-25 11:59:51.195171] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:44.028 [2024-07-25 11:59:51.195187] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:07:44.028 [2024-07-25 11:59:51.262773] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:07:44.028 [2024-07-25 11:59:51.262814] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:44.028 [2024-07-25 11:59:51.262826] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1a17bb0 00:07:44.028 [2024-07-25 11:59:51.262850] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:44.028 [2024-07-25 11:59:51.263858] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:44.028 [2024-07-25 
11:59:51.263882] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:07:44.595 11:59:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:44.595 11:59:51 -- common/autotest_common.sh@852 -- # return 0 00:07:44.595 11:59:51 -- bdev/blockdev.sh@320 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:07:44.595 11:59:51 -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.595 11:59:51 -- bdev/nbd_common.sh@114 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:44.595 11:59:51 -- bdev/nbd_common.sh@114 -- # local bdev_list 00:07:44.595 11:59:51 -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' 00:07:44.595 11:59:51 -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:44.595 11:59:51 -- bdev/nbd_common.sh@23 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:44.595 11:59:51 -- bdev/nbd_common.sh@23 -- # local bdev_list 00:07:44.595 11:59:51 -- bdev/nbd_common.sh@24 -- # local i 00:07:44.595 11:59:51 -- bdev/nbd_common.sh@25 -- # local nbd_device 00:07:44.595 11:59:51 -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:07:44.595 11:59:51 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:44.595 11:59:51 -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 00:07:44.595 11:59:51 -- bdev/nbd_common.sh@28 
-- # nbd_device=/dev/nbd0 00:07:44.595 11:59:51 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:07:44.595 11:59:51 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:07:44.595 11:59:51 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:07:44.595 11:59:51 -- common/autotest_common.sh@857 -- # local i 00:07:44.595 11:59:51 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:44.595 11:59:51 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:44.595 11:59:51 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:07:44.595 11:59:51 -- common/autotest_common.sh@861 -- # break 00:07:44.595 11:59:51 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:44.595 11:59:51 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:44.595 11:59:51 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.595 1+0 records in 00:07:44.595 1+0 records out 00:07:44.595 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000216942 s, 18.9 MB/s 00:07:44.595 11:59:51 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:44.595 11:59:51 -- common/autotest_common.sh@874 -- # size=4096 00:07:44.595 11:59:51 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:44.595 11:59:51 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:44.595 11:59:51 -- common/autotest_common.sh@877 -- # return 0 00:07:44.595 11:59:51 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:44.595 11:59:51 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:44.595 11:59:51 -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 00:07:44.853 11:59:52 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:07:44.853 11:59:52 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 
00:07:44.853 11:59:52 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:07:44.853 11:59:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:07:44.853 11:59:52 -- common/autotest_common.sh@857 -- # local i 00:07:44.853 11:59:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:44.853 11:59:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:44.853 11:59:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:07:44.853 11:59:52 -- common/autotest_common.sh@861 -- # break 00:07:44.853 11:59:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:44.853 11:59:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:44.853 11:59:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:44.853 1+0 records in 00:07:44.853 1+0 records out 00:07:44.853 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242729 s, 16.9 MB/s 00:07:44.853 11:59:52 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:44.853 11:59:52 -- common/autotest_common.sh@874 -- # size=4096 00:07:44.853 11:59:52 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:44.853 11:59:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:44.853 11:59:52 -- common/autotest_common.sh@877 -- # return 0 00:07:44.853 11:59:52 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:44.853 11:59:52 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:44.853 11:59:52 -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 00:07:45.112 11:59:52 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:07:45.112 11:59:52 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:07:45.112 11:59:52 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:07:45.112 11:59:52 -- 
common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:07:45.112 11:59:52 -- common/autotest_common.sh@857 -- # local i 00:07:45.112 11:59:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:45.112 11:59:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:45.112 11:59:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:07:45.112 11:59:52 -- common/autotest_common.sh@861 -- # break 00:07:45.112 11:59:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:45.112 11:59:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:45.112 11:59:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.112 1+0 records in 00:07:45.112 1+0 records out 00:07:45.112 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000223286 s, 18.3 MB/s 00:07:45.112 11:59:52 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:45.112 11:59:52 -- common/autotest_common.sh@874 -- # size=4096 00:07:45.112 11:59:52 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:45.112 11:59:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:45.112 11:59:52 -- common/autotest_common.sh@877 -- # return 0 00:07:45.112 11:59:52 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:45.112 11:59:52 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:45.112 11:59:52 -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 00:07:45.370 11:59:52 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:07:45.370 11:59:52 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:07:45.370 11:59:52 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:07:45.370 11:59:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:07:45.370 11:59:52 -- 
common/autotest_common.sh@857 -- # local i 00:07:45.370 11:59:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:45.370 11:59:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:45.370 11:59:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:07:45.370 11:59:52 -- common/autotest_common.sh@861 -- # break 00:07:45.370 11:59:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:45.370 11:59:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:45.370 11:59:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.370 1+0 records in 00:07:45.370 1+0 records out 00:07:45.370 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278512 s, 14.7 MB/s 00:07:45.370 11:59:52 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:45.370 11:59:52 -- common/autotest_common.sh@874 -- # size=4096 00:07:45.370 11:59:52 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:45.370 11:59:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:45.370 11:59:52 -- common/autotest_common.sh@877 -- # return 0 00:07:45.370 11:59:52 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:45.370 11:59:52 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:45.370 11:59:52 -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 00:07:45.628 11:59:52 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:07:45.628 11:59:52 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:07:45.628 11:59:52 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:07:45.628 11:59:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:07:45.628 11:59:52 -- common/autotest_common.sh@857 -- # local i 00:07:45.628 11:59:52 -- common/autotest_common.sh@859 -- 
# (( i = 1 )) 00:07:45.628 11:59:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:45.628 11:59:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:07:45.628 11:59:52 -- common/autotest_common.sh@861 -- # break 00:07:45.628 11:59:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:45.628 11:59:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:45.628 11:59:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.628 1+0 records in 00:07:45.628 1+0 records out 00:07:45.628 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000334731 s, 12.2 MB/s 00:07:45.628 11:59:52 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:45.628 11:59:52 -- common/autotest_common.sh@874 -- # size=4096 00:07:45.628 11:59:52 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:45.628 11:59:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:45.628 11:59:52 -- common/autotest_common.sh@877 -- # return 0 00:07:45.628 11:59:52 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:45.628 11:59:52 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:45.628 11:59:52 -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 00:07:45.887 11:59:52 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:07:45.887 11:59:52 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:07:45.887 11:59:52 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:07:45.887 11:59:52 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:07:45.887 11:59:52 -- common/autotest_common.sh@857 -- # local i 00:07:45.887 11:59:52 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:45.887 11:59:52 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:45.887 
11:59:52 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:07:45.887 11:59:52 -- common/autotest_common.sh@861 -- # break 00:07:45.887 11:59:52 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:45.887 11:59:52 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:45.887 11:59:52 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.887 1+0 records in 00:07:45.887 1+0 records out 00:07:45.887 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000378878 s, 10.8 MB/s 00:07:45.887 11:59:52 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:45.887 11:59:52 -- common/autotest_common.sh@874 -- # size=4096 00:07:45.887 11:59:52 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:45.887 11:59:52 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:45.887 11:59:52 -- common/autotest_common.sh@877 -- # return 0 00:07:45.887 11:59:52 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:45.887 11:59:52 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:45.887 11:59:52 -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 00:07:45.887 11:59:53 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:07:45.887 11:59:53 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:07:45.887 11:59:53 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:07:45.887 11:59:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:07:45.887 11:59:53 -- common/autotest_common.sh@857 -- # local i 00:07:45.887 11:59:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:45.887 11:59:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:45.887 11:59:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:07:45.887 11:59:53 
-- common/autotest_common.sh@861 -- # break 00:07:45.887 11:59:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:45.887 11:59:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:45.887 11:59:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:45.887 1+0 records in 00:07:45.887 1+0 records out 00:07:45.887 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000397121 s, 10.3 MB/s 00:07:45.887 11:59:53 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:45.887 11:59:53 -- common/autotest_common.sh@874 -- # size=4096 00:07:45.887 11:59:53 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:45.887 11:59:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:45.887 11:59:53 -- common/autotest_common.sh@877 -- # return 0 00:07:45.887 11:59:53 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:45.887 11:59:53 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:46.146 11:59:53 -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 00:07:46.146 11:59:53 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd7 00:07:46.146 11:59:53 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd7 00:07:46.146 11:59:53 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd7 00:07:46.146 11:59:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd7 00:07:46.146 11:59:53 -- common/autotest_common.sh@857 -- # local i 00:07:46.146 11:59:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:46.146 11:59:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:46.146 11:59:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd7 /proc/partitions 00:07:46.146 11:59:53 -- common/autotest_common.sh@861 -- # break 00:07:46.146 11:59:53 -- common/autotest_common.sh@872 -- 
# (( i = 1 )) 00:07:46.146 11:59:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:46.146 11:59:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.146 1+0 records in 00:07:46.146 1+0 records out 00:07:46.146 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000341506 s, 12.0 MB/s 00:07:46.146 11:59:53 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:46.146 11:59:53 -- common/autotest_common.sh@874 -- # size=4096 00:07:46.146 11:59:53 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:46.146 11:59:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:46.146 11:59:53 -- common/autotest_common.sh@877 -- # return 0 00:07:46.146 11:59:53 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:46.146 11:59:53 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:46.146 11:59:53 -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 00:07:46.404 11:59:53 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd8 00:07:46.404 11:59:53 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd8 00:07:46.404 11:59:53 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd8 00:07:46.404 11:59:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd8 00:07:46.404 11:59:53 -- common/autotest_common.sh@857 -- # local i 00:07:46.404 11:59:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:46.404 11:59:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:46.404 11:59:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd8 /proc/partitions 00:07:46.404 11:59:53 -- common/autotest_common.sh@861 -- # break 00:07:46.404 11:59:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:46.404 11:59:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:46.404 
11:59:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd8 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.404 1+0 records in 00:07:46.404 1+0 records out 00:07:46.404 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000394477 s, 10.4 MB/s 00:07:46.404 11:59:53 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:46.404 11:59:53 -- common/autotest_common.sh@874 -- # size=4096 00:07:46.404 11:59:53 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:46.404 11:59:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:46.404 11:59:53 -- common/autotest_common.sh@877 -- # return 0 00:07:46.404 11:59:53 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:46.404 11:59:53 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:46.404 11:59:53 -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 00:07:46.662 11:59:53 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd9 00:07:46.662 11:59:53 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd9 00:07:46.662 11:59:53 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd9 00:07:46.662 11:59:53 -- common/autotest_common.sh@856 -- # local nbd_name=nbd9 00:07:46.662 11:59:53 -- common/autotest_common.sh@857 -- # local i 00:07:46.662 11:59:53 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:46.662 11:59:53 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:46.662 11:59:53 -- common/autotest_common.sh@860 -- # grep -q -w nbd9 /proc/partitions 00:07:46.662 11:59:53 -- common/autotest_common.sh@861 -- # break 00:07:46.662 11:59:53 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:46.662 11:59:53 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:46.662 11:59:53 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd9 
of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.662 1+0 records in 00:07:46.662 1+0 records out 00:07:46.662 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000444012 s, 9.2 MB/s 00:07:46.662 11:59:53 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:46.662 11:59:53 -- common/autotest_common.sh@874 -- # size=4096 00:07:46.662 11:59:53 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:46.662 11:59:53 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:46.662 11:59:53 -- common/autotest_common.sh@877 -- # return 0 00:07:46.662 11:59:53 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:46.662 11:59:53 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:46.662 11:59:53 -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 00:07:46.921 11:59:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd10 00:07:46.921 11:59:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd10 00:07:46.921 11:59:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd10 00:07:46.921 11:59:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:07:46.921 11:59:54 -- common/autotest_common.sh@857 -- # local i 00:07:46.921 11:59:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:46.921 11:59:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:46.921 11:59:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:07:46.921 11:59:54 -- common/autotest_common.sh@861 -- # break 00:07:46.921 11:59:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:46.921 11:59:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:46.921 11:59:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:07:46.921 1+0 records in 00:07:46.921 1+0 records out 00:07:46.921 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375886 s, 10.9 MB/s 00:07:46.921 11:59:54 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:46.921 11:59:54 -- common/autotest_common.sh@874 -- # size=4096 00:07:46.921 11:59:54 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:46.921 11:59:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:46.921 11:59:54 -- common/autotest_common.sh@877 -- # return 0 00:07:46.921 11:59:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:46.921 11:59:54 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:46.921 11:59:54 -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT 00:07:46.921 11:59:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd11 00:07:46.921 11:59:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd11 00:07:46.921 11:59:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd11 00:07:46.921 11:59:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:07:46.921 11:59:54 -- common/autotest_common.sh@857 -- # local i 00:07:46.921 11:59:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:46.921 11:59:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:46.921 11:59:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:07:46.921 11:59:54 -- common/autotest_common.sh@861 -- # break 00:07:46.921 11:59:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:46.921 11:59:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:46.921 11:59:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:46.921 1+0 records in 00:07:46.921 1+0 records out 00:07:47.179 4096 bytes (4.1 
kB, 4.0 KiB) copied, 0.000545612 s, 7.5 MB/s 00:07:47.179 11:59:54 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:47.179 11:59:54 -- common/autotest_common.sh@874 -- # size=4096 00:07:47.179 11:59:54 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:47.179 11:59:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:47.179 11:59:54 -- common/autotest_common.sh@877 -- # return 0 00:07:47.179 11:59:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:47.179 11:59:54 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:47.179 11:59:54 -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid0 00:07:47.179 11:59:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd12 00:07:47.179 11:59:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd12 00:07:47.179 11:59:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd12 00:07:47.179 11:59:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:07:47.179 11:59:54 -- common/autotest_common.sh@857 -- # local i 00:07:47.179 11:59:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:47.179 11:59:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:47.179 11:59:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:07:47.179 11:59:54 -- common/autotest_common.sh@861 -- # break 00:07:47.179 11:59:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:47.179 11:59:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:47.179 11:59:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.179 1+0 records in 00:07:47.179 1+0 records out 00:07:47.179 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000487966 s, 8.4 MB/s 00:07:47.179 11:59:54 -- common/autotest_common.sh@874 
-- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:47.179 11:59:54 -- common/autotest_common.sh@874 -- # size=4096 00:07:47.179 11:59:54 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:47.179 11:59:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:47.179 11:59:54 -- common/autotest_common.sh@877 -- # return 0 00:07:47.179 11:59:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:47.179 11:59:54 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:47.179 11:59:54 -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 00:07:47.437 11:59:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd13 00:07:47.437 11:59:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd13 00:07:47.437 11:59:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd13 00:07:47.437 11:59:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:07:47.437 11:59:54 -- common/autotest_common.sh@857 -- # local i 00:07:47.437 11:59:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:47.437 11:59:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:47.437 11:59:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:07:47.437 11:59:54 -- common/autotest_common.sh@861 -- # break 00:07:47.437 11:59:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:47.437 11:59:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:47.437 11:59:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.437 1+0 records in 00:07:47.437 1+0 records out 00:07:47.437 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000520743 s, 7.9 MB/s 00:07:47.437 11:59:54 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:47.437 
11:59:54 -- common/autotest_common.sh@874 -- # size=4096 00:07:47.437 11:59:54 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:47.437 11:59:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:47.437 11:59:54 -- common/autotest_common.sh@877 -- # return 0 00:07:47.437 11:59:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:47.437 11:59:54 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:47.437 11:59:54 -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 00:07:47.695 11:59:54 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd14 00:07:47.695 11:59:54 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd14 00:07:47.695 11:59:54 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd14 00:07:47.695 11:59:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:07:47.695 11:59:54 -- common/autotest_common.sh@857 -- # local i 00:07:47.695 11:59:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:47.695 11:59:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:47.695 11:59:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:07:47.695 11:59:54 -- common/autotest_common.sh@861 -- # break 00:07:47.695 11:59:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:47.695 11:59:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:47.695 11:59:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.695 1+0 records in 00:07:47.695 1+0 records out 00:07:47.695 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000602238 s, 6.8 MB/s 00:07:47.695 11:59:54 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:47.695 11:59:54 -- common/autotest_common.sh@874 -- # size=4096 00:07:47.695 11:59:54 -- 
common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:47.695 11:59:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:47.695 11:59:54 -- common/autotest_common.sh@877 -- # return 0 00:07:47.695 11:59:54 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:47.695 11:59:54 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:47.695 11:59:54 -- bdev/nbd_common.sh@28 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 00:07:47.953 11:59:55 -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd15 00:07:47.953 11:59:55 -- bdev/nbd_common.sh@30 -- # basename /dev/nbd15 00:07:47.953 11:59:55 -- bdev/nbd_common.sh@30 -- # waitfornbd nbd15 00:07:47.953 11:59:55 -- common/autotest_common.sh@856 -- # local nbd_name=nbd15 00:07:47.953 11:59:55 -- common/autotest_common.sh@857 -- # local i 00:07:47.953 11:59:55 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:47.953 11:59:55 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:47.953 11:59:55 -- common/autotest_common.sh@860 -- # grep -q -w nbd15 /proc/partitions 00:07:47.953 11:59:55 -- common/autotest_common.sh@861 -- # break 00:07:47.953 11:59:55 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:47.953 11:59:55 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:47.953 11:59:55 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:47.953 1+0 records in 00:07:47.953 1+0 records out 00:07:47.953 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000545214 s, 7.5 MB/s 00:07:47.953 11:59:55 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:47.953 11:59:55 -- common/autotest_common.sh@874 -- # size=4096 00:07:47.953 11:59:55 -- common/autotest_common.sh@875 -- # rm -f 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:47.953 11:59:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:47.953 11:59:55 -- common/autotest_common.sh@877 -- # return 0 00:07:47.953 11:59:55 -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:07:47.953 11:59:55 -- bdev/nbd_common.sh@27 -- # (( i < 16 )) 00:07:47.953 11:59:55 -- bdev/nbd_common.sh@118 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:48.212 11:59:55 -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:07:48.212 { 00:07:48.212 "nbd_device": "/dev/nbd0", 00:07:48.212 "bdev_name": "Malloc0" 00:07:48.212 }, 00:07:48.212 { 00:07:48.212 "nbd_device": "/dev/nbd1", 00:07:48.212 "bdev_name": "Malloc1p0" 00:07:48.212 }, 00:07:48.212 { 00:07:48.212 "nbd_device": "/dev/nbd2", 00:07:48.212 "bdev_name": "Malloc1p1" 00:07:48.212 }, 00:07:48.212 { 00:07:48.212 "nbd_device": "/dev/nbd3", 00:07:48.212 "bdev_name": "Malloc2p0" 00:07:48.212 }, 00:07:48.212 { 00:07:48.212 "nbd_device": "/dev/nbd4", 00:07:48.212 "bdev_name": "Malloc2p1" 00:07:48.212 }, 00:07:48.212 { 00:07:48.212 "nbd_device": "/dev/nbd5", 00:07:48.212 "bdev_name": "Malloc2p2" 00:07:48.212 }, 00:07:48.212 { 00:07:48.212 "nbd_device": "/dev/nbd6", 00:07:48.212 "bdev_name": "Malloc2p3" 00:07:48.212 }, 00:07:48.212 { 00:07:48.212 "nbd_device": "/dev/nbd7", 00:07:48.212 "bdev_name": "Malloc2p4" 00:07:48.212 }, 00:07:48.212 { 00:07:48.212 "nbd_device": "/dev/nbd8", 00:07:48.212 "bdev_name": "Malloc2p5" 00:07:48.212 }, 00:07:48.212 { 00:07:48.212 "nbd_device": "/dev/nbd9", 00:07:48.212 "bdev_name": "Malloc2p6" 00:07:48.212 }, 00:07:48.212 { 00:07:48.212 "nbd_device": "/dev/nbd10", 00:07:48.212 "bdev_name": "Malloc2p7" 00:07:48.212 }, 00:07:48.212 { 00:07:48.212 "nbd_device": "/dev/nbd11", 00:07:48.212 "bdev_name": "TestPT" 00:07:48.212 }, 00:07:48.212 { 00:07:48.212 "nbd_device": "/dev/nbd12", 00:07:48.212 "bdev_name": "raid0" 00:07:48.212 }, 00:07:48.212 { 
00:07:48.212 "nbd_device": "/dev/nbd13", 00:07:48.212 "bdev_name": "concat0" 00:07:48.212 }, 00:07:48.212 { 00:07:48.212 "nbd_device": "/dev/nbd14", 00:07:48.212 "bdev_name": "raid1" 00:07:48.212 }, 00:07:48.212 { 00:07:48.212 "nbd_device": "/dev/nbd15", 00:07:48.212 "bdev_name": "AIO0" 00:07:48.212 } 00:07:48.212 ]' 00:07:48.212 11:59:55 -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:07:48.212 11:59:55 -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:07:48.212 11:59:55 -- bdev/nbd_common.sh@119 -- # echo '[ 00:07:48.212 { 00:07:48.212 "nbd_device": "/dev/nbd0", 00:07:48.212 "bdev_name": "Malloc0" 00:07:48.212 }, 00:07:48.212 { 00:07:48.212 "nbd_device": "/dev/nbd1", 00:07:48.212 "bdev_name": "Malloc1p0" 00:07:48.212 }, 00:07:48.212 { 00:07:48.212 "nbd_device": "/dev/nbd2", 00:07:48.212 "bdev_name": "Malloc1p1" 00:07:48.212 }, 00:07:48.212 { 00:07:48.212 "nbd_device": "/dev/nbd3", 00:07:48.212 "bdev_name": "Malloc2p0" 00:07:48.212 }, 00:07:48.212 { 00:07:48.212 "nbd_device": "/dev/nbd4", 00:07:48.212 "bdev_name": "Malloc2p1" 00:07:48.212 }, 00:07:48.212 { 00:07:48.212 "nbd_device": "/dev/nbd5", 00:07:48.212 "bdev_name": "Malloc2p2" 00:07:48.212 }, 00:07:48.212 { 00:07:48.212 "nbd_device": "/dev/nbd6", 00:07:48.212 "bdev_name": "Malloc2p3" 00:07:48.212 }, 00:07:48.212 { 00:07:48.212 "nbd_device": "/dev/nbd7", 00:07:48.212 "bdev_name": "Malloc2p4" 00:07:48.212 }, 00:07:48.212 { 00:07:48.212 "nbd_device": "/dev/nbd8", 00:07:48.212 "bdev_name": "Malloc2p5" 00:07:48.212 }, 00:07:48.212 { 00:07:48.212 "nbd_device": "/dev/nbd9", 00:07:48.212 "bdev_name": "Malloc2p6" 00:07:48.212 }, 00:07:48.212 { 00:07:48.212 "nbd_device": "/dev/nbd10", 00:07:48.212 "bdev_name": "Malloc2p7" 00:07:48.212 }, 00:07:48.212 { 00:07:48.212 "nbd_device": "/dev/nbd11", 00:07:48.212 "bdev_name": "TestPT" 00:07:48.212 }, 00:07:48.212 { 00:07:48.212 "nbd_device": "/dev/nbd12", 00:07:48.212 "bdev_name": "raid0" 00:07:48.212 }, 
00:07:48.212 { 00:07:48.212 "nbd_device": "/dev/nbd13", 00:07:48.212 "bdev_name": "concat0" 00:07:48.212 }, 00:07:48.212 { 00:07:48.212 "nbd_device": "/dev/nbd14", 00:07:48.212 "bdev_name": "raid1" 00:07:48.212 }, 00:07:48.212 { 00:07:48.212 "nbd_device": "/dev/nbd15", 00:07:48.212 "bdev_name": "AIO0" 00:07:48.212 } 00:07:48.212 ]' 00:07:48.212 11:59:55 -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15' 00:07:48.212 11:59:55 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:48.212 11:59:55 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15') 00:07:48.212 11:59:55 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:48.212 11:59:55 -- bdev/nbd_common.sh@51 -- # local i 00:07:48.212 11:59:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.212 11:59:55 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:48.212 11:59:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:48.212 11:59:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:48.212 11:59:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:48.212 11:59:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.212 11:59:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.212 11:59:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:48.212 11:59:55 -- bdev/nbd_common.sh@41 -- # break 00:07:48.212 11:59:55 -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.212 11:59:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.212 11:59:55 -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:48.471 11:59:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:48.471 11:59:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:48.471 11:59:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:48.471 11:59:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.471 11:59:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.471 11:59:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:48.471 11:59:55 -- bdev/nbd_common.sh@41 -- # break 00:07:48.471 11:59:55 -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.471 11:59:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.471 11:59:55 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:48.729 11:59:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:48.729 11:59:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:48.729 11:59:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:48.729 11:59:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.729 11:59:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.729 11:59:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:48.729 11:59:55 -- bdev/nbd_common.sh@41 -- # break 00:07:48.729 11:59:55 -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.729 11:59:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.729 11:59:55 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:48.729 11:59:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:48.729 11:59:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:48.729 11:59:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:48.729 11:59:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.729 
11:59:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.729 11:59:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:48.729 11:59:56 -- bdev/nbd_common.sh@41 -- # break 00:07:48.729 11:59:56 -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.729 11:59:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.729 11:59:56 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:48.987 11:59:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:48.987 11:59:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:48.987 11:59:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:48.987 11:59:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:48.987 11:59:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:48.987 11:59:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:48.987 11:59:56 -- bdev/nbd_common.sh@41 -- # break 00:07:48.987 11:59:56 -- bdev/nbd_common.sh@45 -- # return 0 00:07:48.987 11:59:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:48.987 11:59:56 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:49.246 11:59:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:49.246 11:59:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:49.246 11:59:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:49.246 11:59:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.246 11:59:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.246 11:59:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:49.246 11:59:56 -- bdev/nbd_common.sh@41 -- # break 00:07:49.246 11:59:56 -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.246 11:59:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.246 11:59:56 -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:49.246 11:59:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:49.246 11:59:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:49.246 11:59:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:07:49.246 11:59:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.246 11:59:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.246 11:59:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:49.246 11:59:56 -- bdev/nbd_common.sh@41 -- # break 00:07:49.246 11:59:56 -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.246 11:59:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.246 11:59:56 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:07:49.505 11:59:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:07:49.505 11:59:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd7 00:07:49.505 11:59:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:07:49.505 11:59:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.505 11:59:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.505 11:59:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:07:49.505 11:59:56 -- bdev/nbd_common.sh@41 -- # break 00:07:49.505 11:59:56 -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.505 11:59:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.505 11:59:56 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:07:49.763 11:59:56 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:07:49.763 11:59:56 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:07:49.763 11:59:56 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:07:49.763 11:59:56 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:49.763 
11:59:56 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:49.763 11:59:56 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:07:49.763 11:59:56 -- bdev/nbd_common.sh@41 -- # break 00:07:49.763 11:59:56 -- bdev/nbd_common.sh@45 -- # return 0 00:07:49.763 11:59:56 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:49.763 11:59:56 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:07:50.021 11:59:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:07:50.021 11:59:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:07:50.021 11:59:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:07:50.021 11:59:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.021 11:59:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.021 11:59:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:07:50.021 11:59:57 -- bdev/nbd_common.sh@41 -- # break 00:07:50.021 11:59:57 -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.021 11:59:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.021 11:59:57 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:50.021 11:59:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:50.021 11:59:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:50.021 11:59:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:50.021 11:59:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.021 11:59:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.022 11:59:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:50.022 11:59:57 -- bdev/nbd_common.sh@41 -- # break 00:07:50.022 11:59:57 -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.022 11:59:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.022 11:59:57 -- bdev/nbd_common.sh@54 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:50.280 11:59:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:50.280 11:59:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:50.280 11:59:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:50.280 11:59:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.280 11:59:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.280 11:59:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:50.280 11:59:57 -- bdev/nbd_common.sh@41 -- # break 00:07:50.280 11:59:57 -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.280 11:59:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.280 11:59:57 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:50.539 11:59:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:50.539 11:59:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:50.539 11:59:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:50.539 11:59:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.539 11:59:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.539 11:59:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:50.539 11:59:57 -- bdev/nbd_common.sh@41 -- # break 00:07:50.539 11:59:57 -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.539 11:59:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.539 11:59:57 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:50.539 11:59:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:50.539 11:59:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:50.539 11:59:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:50.539 11:59:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 
00:07:50.539 11:59:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.539 11:59:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:50.539 11:59:57 -- bdev/nbd_common.sh@41 -- # break 00:07:50.539 11:59:57 -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.539 11:59:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.539 11:59:57 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:50.797 11:59:57 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:50.797 11:59:57 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:50.797 11:59:57 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:50.797 11:59:57 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:50.797 11:59:57 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:50.797 11:59:57 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:50.797 11:59:57 -- bdev/nbd_common.sh@41 -- # break 00:07:50.797 11:59:57 -- bdev/nbd_common.sh@45 -- # return 0 00:07:50.797 11:59:57 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:50.797 11:59:57 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:07:51.056 11:59:58 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:07:51.056 11:59:58 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:07:51.056 11:59:58 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:07:51.056 11:59:58 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:51.056 11:59:58 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:51.056 11:59:58 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:07:51.056 11:59:58 -- bdev/nbd_common.sh@41 -- # break 00:07:51.056 11:59:58 -- bdev/nbd_common.sh@45 -- # return 0 00:07:51.056 11:59:58 -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:51.056 11:59:58 -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.056 11:59:58 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:51.056 11:59:58 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:51.056 11:59:58 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:51.056 11:59:58 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:51.314 11:59:58 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:51.314 11:59:58 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:51.314 11:59:58 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:51.314 11:59:58 -- bdev/nbd_common.sh@65 -- # true 00:07:51.314 11:59:58 -- bdev/nbd_common.sh@65 -- # count=0 00:07:51.314 11:59:58 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:51.314 11:59:58 -- bdev/nbd_common.sh@122 -- # count=0 00:07:51.314 11:59:58 -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:07:51.314 11:59:58 -- bdev/nbd_common.sh@127 -- # return 0 00:07:51.314 11:59:58 -- bdev/blockdev.sh@321 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:07:51.314 11:59:58 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.315 11:59:58 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:51.315 11:59:58 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:51.315 11:59:58 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' 
'/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:51.315 11:59:58 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:51.315 11:59:58 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:07:51.315 11:59:58 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:51.315 11:59:58 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1p0' 'Malloc1p1' 'Malloc2p0' 'Malloc2p1' 'Malloc2p2' 'Malloc2p3' 'Malloc2p4' 'Malloc2p5' 'Malloc2p6' 'Malloc2p7' 'TestPT' 'raid0' 'concat0' 'raid1' 'AIO0') 00:07:51.315 11:59:58 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:51.315 11:59:58 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:51.315 11:59:58 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:51.315 11:59:58 -- bdev/nbd_common.sh@12 -- # local i 00:07:51.315 11:59:58 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:51.315 11:59:58 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:51.315 11:59:58 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:51.315 /dev/nbd0 00:07:51.315 11:59:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:51.315 11:59:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:51.315 11:59:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:07:51.315 11:59:58 -- common/autotest_common.sh@857 -- # local i 00:07:51.315 11:59:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 
00:07:51.315 11:59:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:51.315 11:59:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:07:51.315 11:59:58 -- common/autotest_common.sh@861 -- # break 00:07:51.315 11:59:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:51.315 11:59:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:51.315 11:59:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:51.315 1+0 records in 00:07:51.315 1+0 records out 00:07:51.315 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000261375 s, 15.7 MB/s 00:07:51.315 11:59:58 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:51.315 11:59:58 -- common/autotest_common.sh@874 -- # size=4096 00:07:51.315 11:59:58 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:51.315 11:59:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:51.315 11:59:58 -- common/autotest_common.sh@877 -- # return 0 00:07:51.315 11:59:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:51.315 11:59:58 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:51.315 11:59:58 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p0 /dev/nbd1 00:07:51.574 /dev/nbd1 00:07:51.574 11:59:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:51.574 11:59:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:51.574 11:59:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:07:51.574 11:59:58 -- common/autotest_common.sh@857 -- # local i 00:07:51.574 11:59:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:51.574 11:59:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:51.574 11:59:58 -- common/autotest_common.sh@860 -- # grep -q 
-w nbd1 /proc/partitions 00:07:51.574 11:59:58 -- common/autotest_common.sh@861 -- # break 00:07:51.574 11:59:58 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:51.574 11:59:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:51.574 11:59:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:51.574 1+0 records in 00:07:51.574 1+0 records out 00:07:51.574 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276992 s, 14.8 MB/s 00:07:51.574 11:59:58 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:51.574 11:59:58 -- common/autotest_common.sh@874 -- # size=4096 00:07:51.574 11:59:58 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:51.574 11:59:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:51.574 11:59:58 -- common/autotest_common.sh@877 -- # return 0 00:07:51.574 11:59:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:51.574 11:59:58 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:51.574 11:59:58 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1p1 /dev/nbd10 00:07:51.833 /dev/nbd10 00:07:51.833 11:59:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:07:51.833 11:59:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:07:51.833 11:59:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd10 00:07:51.833 11:59:58 -- common/autotest_common.sh@857 -- # local i 00:07:51.833 11:59:58 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:51.833 11:59:58 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:51.833 11:59:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd10 /proc/partitions 00:07:51.833 11:59:58 -- common/autotest_common.sh@861 -- # break 00:07:51.833 11:59:58 -- 
common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:51.833 11:59:58 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:51.833 11:59:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd10 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:51.833 1+0 records in 00:07:51.833 1+0 records out 00:07:51.833 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000253872 s, 16.1 MB/s 00:07:51.833 11:59:58 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:51.833 11:59:58 -- common/autotest_common.sh@874 -- # size=4096 00:07:51.833 11:59:58 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:51.833 11:59:58 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:51.833 11:59:58 -- common/autotest_common.sh@877 -- # return 0 00:07:51.833 11:59:58 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:51.833 11:59:58 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:51.834 11:59:59 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p0 /dev/nbd11 00:07:52.093 /dev/nbd11 00:07:52.093 11:59:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:07:52.093 11:59:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:07:52.093 11:59:59 -- common/autotest_common.sh@856 -- # local nbd_name=nbd11 00:07:52.093 11:59:59 -- common/autotest_common.sh@857 -- # local i 00:07:52.093 11:59:59 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:52.093 11:59:59 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:52.093 11:59:59 -- common/autotest_common.sh@860 -- # grep -q -w nbd11 /proc/partitions 00:07:52.093 11:59:59 -- common/autotest_common.sh@861 -- # break 00:07:52.093 11:59:59 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:52.093 11:59:59 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:52.093 
11:59:59 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd11 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:52.093 1+0 records in 00:07:52.093 1+0 records out 00:07:52.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277074 s, 14.8 MB/s 00:07:52.093 11:59:59 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:52.093 11:59:59 -- common/autotest_common.sh@874 -- # size=4096 00:07:52.093 11:59:59 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:52.093 11:59:59 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:52.093 11:59:59 -- common/autotest_common.sh@877 -- # return 0 00:07:52.093 11:59:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:52.093 11:59:59 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:52.093 11:59:59 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p1 /dev/nbd12 00:07:52.093 /dev/nbd12 00:07:52.093 11:59:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:07:52.093 11:59:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:07:52.093 11:59:59 -- common/autotest_common.sh@856 -- # local nbd_name=nbd12 00:07:52.093 11:59:59 -- common/autotest_common.sh@857 -- # local i 00:07:52.093 11:59:59 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:52.093 11:59:59 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:52.093 11:59:59 -- common/autotest_common.sh@860 -- # grep -q -w nbd12 /proc/partitions 00:07:52.093 11:59:59 -- common/autotest_common.sh@861 -- # break 00:07:52.093 11:59:59 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:52.093 11:59:59 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:52.093 11:59:59 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd12 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 
bs=4096 count=1 iflag=direct 00:07:52.093 1+0 records in 00:07:52.093 1+0 records out 00:07:52.093 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324546 s, 12.6 MB/s 00:07:52.093 11:59:59 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:52.093 11:59:59 -- common/autotest_common.sh@874 -- # size=4096 00:07:52.093 11:59:59 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:52.093 11:59:59 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:52.093 11:59:59 -- common/autotest_common.sh@877 -- # return 0 00:07:52.093 11:59:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:52.394 11:59:59 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:52.394 11:59:59 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p2 /dev/nbd13 00:07:52.394 /dev/nbd13 00:07:52.394 11:59:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:07:52.394 11:59:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:07:52.394 11:59:59 -- common/autotest_common.sh@856 -- # local nbd_name=nbd13 00:07:52.394 11:59:59 -- common/autotest_common.sh@857 -- # local i 00:07:52.394 11:59:59 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:52.394 11:59:59 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:52.394 11:59:59 -- common/autotest_common.sh@860 -- # grep -q -w nbd13 /proc/partitions 00:07:52.394 11:59:59 -- common/autotest_common.sh@861 -- # break 00:07:52.394 11:59:59 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:52.394 11:59:59 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:52.394 11:59:59 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd13 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:52.394 1+0 records in 00:07:52.394 1+0 records out 00:07:52.394 4096 bytes (4.1 kB, 4.0 KiB) 
copied, 0.000377204 s, 10.9 MB/s 00:07:52.394 11:59:59 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:52.394 11:59:59 -- common/autotest_common.sh@874 -- # size=4096 00:07:52.394 11:59:59 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:52.394 11:59:59 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:52.394 11:59:59 -- common/autotest_common.sh@877 -- # return 0 00:07:52.394 11:59:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:52.394 11:59:59 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:52.394 11:59:59 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p3 /dev/nbd14 00:07:52.663 /dev/nbd14 00:07:52.663 11:59:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:07:52.663 11:59:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:07:52.663 11:59:59 -- common/autotest_common.sh@856 -- # local nbd_name=nbd14 00:07:52.663 11:59:59 -- common/autotest_common.sh@857 -- # local i 00:07:52.663 11:59:59 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:52.663 11:59:59 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:52.663 11:59:59 -- common/autotest_common.sh@860 -- # grep -q -w nbd14 /proc/partitions 00:07:52.663 11:59:59 -- common/autotest_common.sh@861 -- # break 00:07:52.663 11:59:59 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:52.663 11:59:59 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:52.663 11:59:59 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd14 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:52.663 1+0 records in 00:07:52.663 1+0 records out 00:07:52.663 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000330163 s, 12.4 MB/s 00:07:52.663 11:59:59 -- common/autotest_common.sh@874 -- # stat -c %s 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:52.663 11:59:59 -- common/autotest_common.sh@874 -- # size=4096 00:07:52.663 11:59:59 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:52.663 11:59:59 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:52.663 11:59:59 -- common/autotest_common.sh@877 -- # return 0 00:07:52.663 11:59:59 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:52.663 11:59:59 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:52.663 11:59:59 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p4 /dev/nbd15 00:07:52.923 /dev/nbd15 00:07:52.923 11:59:59 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd15 00:07:52.923 11:59:59 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd15 00:07:52.923 11:59:59 -- common/autotest_common.sh@856 -- # local nbd_name=nbd15 00:07:52.923 11:59:59 -- common/autotest_common.sh@857 -- # local i 00:07:52.923 11:59:59 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:52.923 11:59:59 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:52.923 11:59:59 -- common/autotest_common.sh@860 -- # grep -q -w nbd15 /proc/partitions 00:07:52.923 11:59:59 -- common/autotest_common.sh@861 -- # break 00:07:52.923 11:59:59 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:52.923 11:59:59 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:52.923 11:59:59 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd15 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:52.923 1+0 records in 00:07:52.923 1+0 records out 00:07:52.923 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000449741 s, 9.1 MB/s 00:07:52.923 12:00:00 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:52.923 12:00:00 -- common/autotest_common.sh@874 -- # size=4096 
00:07:52.923 12:00:00 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:52.923 12:00:00 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:52.923 12:00:00 -- common/autotest_common.sh@877 -- # return 0 00:07:52.923 12:00:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:52.923 12:00:00 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:52.923 12:00:00 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p5 /dev/nbd2 00:07:52.923 /dev/nbd2 00:07:52.923 12:00:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd2 00:07:52.923 12:00:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd2 00:07:52.923 12:00:00 -- common/autotest_common.sh@856 -- # local nbd_name=nbd2 00:07:52.923 12:00:00 -- common/autotest_common.sh@857 -- # local i 00:07:52.923 12:00:00 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:52.923 12:00:00 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:52.923 12:00:00 -- common/autotest_common.sh@860 -- # grep -q -w nbd2 /proc/partitions 00:07:52.923 12:00:00 -- common/autotest_common.sh@861 -- # break 00:07:52.923 12:00:00 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:52.923 12:00:00 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:52.923 12:00:00 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd2 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:52.923 1+0 records in 00:07:52.923 1+0 records out 00:07:52.923 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000434094 s, 9.4 MB/s 00:07:52.923 12:00:00 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:52.923 12:00:00 -- common/autotest_common.sh@874 -- # size=4096 00:07:52.923 12:00:00 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 
00:07:52.923 12:00:00 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:52.923 12:00:00 -- common/autotest_common.sh@877 -- # return 0 00:07:52.923 12:00:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:52.923 12:00:00 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:52.924 12:00:00 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p6 /dev/nbd3 00:07:53.183 /dev/nbd3 00:07:53.183 12:00:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd3 00:07:53.183 12:00:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd3 00:07:53.183 12:00:00 -- common/autotest_common.sh@856 -- # local nbd_name=nbd3 00:07:53.183 12:00:00 -- common/autotest_common.sh@857 -- # local i 00:07:53.183 12:00:00 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:53.183 12:00:00 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:53.183 12:00:00 -- common/autotest_common.sh@860 -- # grep -q -w nbd3 /proc/partitions 00:07:53.183 12:00:00 -- common/autotest_common.sh@861 -- # break 00:07:53.183 12:00:00 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:53.183 12:00:00 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:53.183 12:00:00 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd3 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.183 1+0 records in 00:07:53.183 1+0 records out 00:07:53.183 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447949 s, 9.1 MB/s 00:07:53.183 12:00:00 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.183 12:00:00 -- common/autotest_common.sh@874 -- # size=4096 00:07:53.183 12:00:00 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.183 12:00:00 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:53.183 12:00:00 -- common/autotest_common.sh@877 -- # 
return 0 00:07:53.183 12:00:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:53.183 12:00:00 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:53.183 12:00:00 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc2p7 /dev/nbd4 00:07:53.442 /dev/nbd4 00:07:53.442 12:00:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd4 00:07:53.442 12:00:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd4 00:07:53.442 12:00:00 -- common/autotest_common.sh@856 -- # local nbd_name=nbd4 00:07:53.442 12:00:00 -- common/autotest_common.sh@857 -- # local i 00:07:53.442 12:00:00 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:53.442 12:00:00 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:53.442 12:00:00 -- common/autotest_common.sh@860 -- # grep -q -w nbd4 /proc/partitions 00:07:53.442 12:00:00 -- common/autotest_common.sh@861 -- # break 00:07:53.442 12:00:00 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:53.442 12:00:00 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:53.442 12:00:00 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd4 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.442 1+0 records in 00:07:53.442 1+0 records out 00:07:53.442 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00040227 s, 10.2 MB/s 00:07:53.442 12:00:00 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.442 12:00:00 -- common/autotest_common.sh@874 -- # size=4096 00:07:53.442 12:00:00 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.442 12:00:00 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:53.442 12:00:00 -- common/autotest_common.sh@877 -- # return 0 00:07:53.442 12:00:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:53.442 12:00:00 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 
00:07:53.442 12:00:00 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk TestPT /dev/nbd5 00:07:53.702 /dev/nbd5 00:07:53.702 12:00:00 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd5 00:07:53.702 12:00:00 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd5 00:07:53.702 12:00:00 -- common/autotest_common.sh@856 -- # local nbd_name=nbd5 00:07:53.702 12:00:00 -- common/autotest_common.sh@857 -- # local i 00:07:53.702 12:00:00 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:53.702 12:00:00 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:53.702 12:00:00 -- common/autotest_common.sh@860 -- # grep -q -w nbd5 /proc/partitions 00:07:53.702 12:00:00 -- common/autotest_common.sh@861 -- # break 00:07:53.702 12:00:00 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:53.702 12:00:00 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:53.702 12:00:00 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd5 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.702 1+0 records in 00:07:53.702 1+0 records out 00:07:53.702 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000553557 s, 7.4 MB/s 00:07:53.702 12:00:00 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.702 12:00:00 -- common/autotest_common.sh@874 -- # size=4096 00:07:53.702 12:00:00 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.702 12:00:00 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:53.702 12:00:00 -- common/autotest_common.sh@877 -- # return 0 00:07:53.702 12:00:00 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:53.702 12:00:00 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:53.702 12:00:00 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock 
nbd_start_disk raid0 /dev/nbd6 00:07:53.962 /dev/nbd6 00:07:53.962 12:00:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd6 00:07:53.962 12:00:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd6 00:07:53.962 12:00:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd6 00:07:53.962 12:00:01 -- common/autotest_common.sh@857 -- # local i 00:07:53.962 12:00:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:53.962 12:00:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:53.962 12:00:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd6 /proc/partitions 00:07:53.962 12:00:01 -- common/autotest_common.sh@861 -- # break 00:07:53.962 12:00:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:53.962 12:00:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:53.962 12:00:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd6 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.962 1+0 records in 00:07:53.962 1+0 records out 00:07:53.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000524007 s, 7.8 MB/s 00:07:53.962 12:00:01 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.962 12:00:01 -- common/autotest_common.sh@874 -- # size=4096 00:07:53.962 12:00:01 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:53.962 12:00:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:53.962 12:00:01 -- common/autotest_common.sh@877 -- # return 0 00:07:53.962 12:00:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:53.962 12:00:01 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:53.962 12:00:01 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk concat0 /dev/nbd7 00:07:53.962 /dev/nbd7 00:07:53.962 12:00:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd7 00:07:53.962 
12:00:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd7 00:07:53.962 12:00:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd7 00:07:53.962 12:00:01 -- common/autotest_common.sh@857 -- # local i 00:07:53.962 12:00:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:53.962 12:00:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:53.962 12:00:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd7 /proc/partitions 00:07:53.962 12:00:01 -- common/autotest_common.sh@861 -- # break 00:07:53.962 12:00:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:53.962 12:00:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:53.962 12:00:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd7 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:53.962 1+0 records in 00:07:53.962 1+0 records out 00:07:53.962 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000512203 s, 8.0 MB/s 00:07:54.220 12:00:01 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.220 12:00:01 -- common/autotest_common.sh@874 -- # size=4096 00:07:54.220 12:00:01 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.220 12:00:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:54.220 12:00:01 -- common/autotest_common.sh@877 -- # return 0 00:07:54.220 12:00:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:54.220 12:00:01 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:54.220 12:00:01 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid1 /dev/nbd8 00:07:54.220 /dev/nbd8 00:07:54.220 12:00:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd8 00:07:54.220 12:00:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd8 00:07:54.220 12:00:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd8 
00:07:54.220 12:00:01 -- common/autotest_common.sh@857 -- # local i 00:07:54.220 12:00:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:07:54.220 12:00:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:54.220 12:00:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd8 /proc/partitions 00:07:54.220 12:00:01 -- common/autotest_common.sh@861 -- # break 00:07:54.220 12:00:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:54.220 12:00:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:54.220 12:00:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd8 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:54.220 1+0 records in 00:07:54.220 1+0 records out 00:07:54.220 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000557099 s, 7.4 MB/s 00:07:54.220 12:00:01 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.220 12:00:01 -- common/autotest_common.sh@874 -- # size=4096 00:07:54.220 12:00:01 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.220 12:00:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:54.220 12:00:01 -- common/autotest_common.sh@877 -- # return 0 00:07:54.220 12:00:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:54.220 12:00:01 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:54.220 12:00:01 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk AIO0 /dev/nbd9 00:07:54.476 /dev/nbd9 00:07:54.476 12:00:01 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd9 00:07:54.476 12:00:01 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd9 00:07:54.476 12:00:01 -- common/autotest_common.sh@856 -- # local nbd_name=nbd9 00:07:54.476 12:00:01 -- common/autotest_common.sh@857 -- # local i 00:07:54.476 12:00:01 -- common/autotest_common.sh@859 -- # (( i = 1 )) 
00:07:54.476 12:00:01 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:07:54.476 12:00:01 -- common/autotest_common.sh@860 -- # grep -q -w nbd9 /proc/partitions 00:07:54.476 12:00:01 -- common/autotest_common.sh@861 -- # break 00:07:54.476 12:00:01 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:07:54.476 12:00:01 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:07:54.476 12:00:01 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd9 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:54.476 1+0 records in 00:07:54.476 1+0 records out 00:07:54.476 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000642677 s, 6.4 MB/s 00:07:54.476 12:00:01 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.476 12:00:01 -- common/autotest_common.sh@874 -- # size=4096 00:07:54.476 12:00:01 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:07:54.476 12:00:01 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:07:54.476 12:00:01 -- common/autotest_common.sh@877 -- # return 0 00:07:54.476 12:00:01 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:54.476 12:00:01 -- bdev/nbd_common.sh@14 -- # (( i < 16 )) 00:07:54.476 12:00:01 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:54.476 12:00:01 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:54.476 12:00:01 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:54.732 12:00:01 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:54.732 { 00:07:54.732 "nbd_device": "/dev/nbd0", 00:07:54.732 "bdev_name": "Malloc0" 00:07:54.732 }, 00:07:54.732 { 00:07:54.732 "nbd_device": "/dev/nbd1", 00:07:54.732 "bdev_name": "Malloc1p0" 00:07:54.732 }, 00:07:54.732 { 00:07:54.732 "nbd_device": "/dev/nbd10", 00:07:54.732 
"bdev_name": "Malloc1p1" 00:07:54.732 }, 00:07:54.732 { 00:07:54.732 "nbd_device": "/dev/nbd11", 00:07:54.732 "bdev_name": "Malloc2p0" 00:07:54.732 }, 00:07:54.732 { 00:07:54.732 "nbd_device": "/dev/nbd12", 00:07:54.732 "bdev_name": "Malloc2p1" 00:07:54.732 }, 00:07:54.732 { 00:07:54.732 "nbd_device": "/dev/nbd13", 00:07:54.732 "bdev_name": "Malloc2p2" 00:07:54.732 }, 00:07:54.732 { 00:07:54.732 "nbd_device": "/dev/nbd14", 00:07:54.732 "bdev_name": "Malloc2p3" 00:07:54.732 }, 00:07:54.732 { 00:07:54.732 "nbd_device": "/dev/nbd15", 00:07:54.732 "bdev_name": "Malloc2p4" 00:07:54.732 }, 00:07:54.732 { 00:07:54.732 "nbd_device": "/dev/nbd2", 00:07:54.732 "bdev_name": "Malloc2p5" 00:07:54.732 }, 00:07:54.732 { 00:07:54.732 "nbd_device": "/dev/nbd3", 00:07:54.732 "bdev_name": "Malloc2p6" 00:07:54.732 }, 00:07:54.732 { 00:07:54.732 "nbd_device": "/dev/nbd4", 00:07:54.732 "bdev_name": "Malloc2p7" 00:07:54.732 }, 00:07:54.732 { 00:07:54.732 "nbd_device": "/dev/nbd5", 00:07:54.732 "bdev_name": "TestPT" 00:07:54.732 }, 00:07:54.732 { 00:07:54.732 "nbd_device": "/dev/nbd6", 00:07:54.732 "bdev_name": "raid0" 00:07:54.732 }, 00:07:54.732 { 00:07:54.732 "nbd_device": "/dev/nbd7", 00:07:54.732 "bdev_name": "concat0" 00:07:54.732 }, 00:07:54.732 { 00:07:54.732 "nbd_device": "/dev/nbd8", 00:07:54.732 "bdev_name": "raid1" 00:07:54.732 }, 00:07:54.732 { 00:07:54.732 "nbd_device": "/dev/nbd9", 00:07:54.732 "bdev_name": "AIO0" 00:07:54.732 } 00:07:54.732 ]' 00:07:54.732 12:00:01 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:54.732 12:00:01 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:54.732 { 00:07:54.732 "nbd_device": "/dev/nbd0", 00:07:54.732 "bdev_name": "Malloc0" 00:07:54.732 }, 00:07:54.732 { 00:07:54.732 "nbd_device": "/dev/nbd1", 00:07:54.732 "bdev_name": "Malloc1p0" 00:07:54.732 }, 00:07:54.732 { 00:07:54.732 "nbd_device": "/dev/nbd10", 00:07:54.732 "bdev_name": "Malloc1p1" 00:07:54.732 }, 00:07:54.732 { 00:07:54.732 "nbd_device": "/dev/nbd11", 00:07:54.732 
"bdev_name": "Malloc2p0" 00:07:54.732 }, 00:07:54.732 { 00:07:54.732 "nbd_device": "/dev/nbd12", 00:07:54.732 "bdev_name": "Malloc2p1" 00:07:54.732 }, 00:07:54.732 { 00:07:54.732 "nbd_device": "/dev/nbd13", 00:07:54.732 "bdev_name": "Malloc2p2" 00:07:54.732 }, 00:07:54.732 { 00:07:54.732 "nbd_device": "/dev/nbd14", 00:07:54.732 "bdev_name": "Malloc2p3" 00:07:54.732 }, 00:07:54.732 { 00:07:54.732 "nbd_device": "/dev/nbd15", 00:07:54.732 "bdev_name": "Malloc2p4" 00:07:54.732 }, 00:07:54.732 { 00:07:54.732 "nbd_device": "/dev/nbd2", 00:07:54.732 "bdev_name": "Malloc2p5" 00:07:54.732 }, 00:07:54.732 { 00:07:54.732 "nbd_device": "/dev/nbd3", 00:07:54.732 "bdev_name": "Malloc2p6" 00:07:54.732 }, 00:07:54.732 { 00:07:54.732 "nbd_device": "/dev/nbd4", 00:07:54.732 "bdev_name": "Malloc2p7" 00:07:54.732 }, 00:07:54.732 { 00:07:54.732 "nbd_device": "/dev/nbd5", 00:07:54.732 "bdev_name": "TestPT" 00:07:54.732 }, 00:07:54.732 { 00:07:54.732 "nbd_device": "/dev/nbd6", 00:07:54.732 "bdev_name": "raid0" 00:07:54.732 }, 00:07:54.732 { 00:07:54.732 "nbd_device": "/dev/nbd7", 00:07:54.732 "bdev_name": "concat0" 00:07:54.732 }, 00:07:54.732 { 00:07:54.732 "nbd_device": "/dev/nbd8", 00:07:54.732 "bdev_name": "raid1" 00:07:54.732 }, 00:07:54.732 { 00:07:54.732 "nbd_device": "/dev/nbd9", 00:07:54.732 "bdev_name": "AIO0" 00:07:54.732 } 00:07:54.732 ]' 00:07:54.732 12:00:01 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:54.732 /dev/nbd1 00:07:54.733 /dev/nbd10 00:07:54.733 /dev/nbd11 00:07:54.733 /dev/nbd12 00:07:54.733 /dev/nbd13 00:07:54.733 /dev/nbd14 00:07:54.733 /dev/nbd15 00:07:54.733 /dev/nbd2 00:07:54.733 /dev/nbd3 00:07:54.733 /dev/nbd4 00:07:54.733 /dev/nbd5 00:07:54.733 /dev/nbd6 00:07:54.733 /dev/nbd7 00:07:54.733 /dev/nbd8 00:07:54.733 /dev/nbd9' 00:07:54.733 12:00:01 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:54.733 /dev/nbd1 00:07:54.733 /dev/nbd10 00:07:54.733 /dev/nbd11 00:07:54.733 /dev/nbd12 00:07:54.733 /dev/nbd13 00:07:54.733 /dev/nbd14 
00:07:54.733 /dev/nbd15 00:07:54.733 /dev/nbd2 00:07:54.733 /dev/nbd3 00:07:54.733 /dev/nbd4 00:07:54.733 /dev/nbd5 00:07:54.733 /dev/nbd6 00:07:54.733 /dev/nbd7 00:07:54.733 /dev/nbd8 00:07:54.733 /dev/nbd9' 00:07:54.733 12:00:01 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:54.733 12:00:01 -- bdev/nbd_common.sh@65 -- # count=16 00:07:54.733 12:00:01 -- bdev/nbd_common.sh@66 -- # echo 16 00:07:54.733 12:00:01 -- bdev/nbd_common.sh@95 -- # count=16 00:07:54.733 12:00:01 -- bdev/nbd_common.sh@96 -- # '[' 16 -ne 16 ']' 00:07:54.733 12:00:01 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' write 00:07:54.733 12:00:01 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:54.733 12:00:01 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:54.733 12:00:01 -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:54.733 12:00:01 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:07:54.733 12:00:01 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:54.733 12:00:01 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:07:54.733 256+0 records in 00:07:54.733 256+0 records out 00:07:54.733 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.01151 s, 91.1 MB/s 00:07:54.733 12:00:01 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:54.733 12:00:01 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:54.988 256+0 records in 00:07:54.988 256+0 records out 
00:07:54.988 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.117186 s, 8.9 MB/s 00:07:54.988 12:00:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:54.988 12:00:02 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:54.988 256+0 records in 00:07:54.988 256+0 records out 00:07:54.988 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121658 s, 8.6 MB/s 00:07:54.988 12:00:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:54.988 12:00:02 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:07:55.246 256+0 records in 00:07:55.246 256+0 records out 00:07:55.246 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12199 s, 8.6 MB/s 00:07:55.246 12:00:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:55.246 12:00:02 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:07:55.246 256+0 records in 00:07:55.246 256+0 records out 00:07:55.246 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.118865 s, 8.8 MB/s 00:07:55.246 12:00:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:55.246 12:00:02 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:07:55.246 256+0 records in 00:07:55.246 256+0 records out 00:07:55.246 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.120485 s, 8.7 MB/s 00:07:55.246 12:00:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:55.246 12:00:02 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:07:55.504 256+0 records in 00:07:55.504 256+0 records out 00:07:55.504 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122508 s, 8.6 
MB/s 00:07:55.504 12:00:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:55.504 12:00:02 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:07:55.504 256+0 records in 00:07:55.504 256+0 records out 00:07:55.504 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121541 s, 8.6 MB/s 00:07:55.504 12:00:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:55.504 12:00:02 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd15 bs=4096 count=256 oflag=direct 00:07:55.762 256+0 records in 00:07:55.762 256+0 records out 00:07:55.762 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121078 s, 8.7 MB/s 00:07:55.762 12:00:02 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:55.762 12:00:02 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd2 bs=4096 count=256 oflag=direct 00:07:55.762 256+0 records in 00:07:55.762 256+0 records out 00:07:55.762 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121072 s, 8.7 MB/s 00:07:55.762 12:00:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:55.762 12:00:03 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd3 bs=4096 count=256 oflag=direct 00:07:56.021 256+0 records in 00:07:56.021 256+0 records out 00:07:56.021 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121691 s, 8.6 MB/s 00:07:56.021 12:00:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:56.021 12:00:03 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd4 bs=4096 count=256 oflag=direct 00:07:56.021 256+0 records in 00:07:56.021 256+0 records out 00:07:56.021 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.121924 s, 8.6 MB/s 00:07:56.021 12:00:03 -- bdev/nbd_common.sh@77 -- # for i in 
"${nbd_list[@]}" 00:07:56.021 12:00:03 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd5 bs=4096 count=256 oflag=direct 00:07:56.278 256+0 records in 00:07:56.278 256+0 records out 00:07:56.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12201 s, 8.6 MB/s 00:07:56.278 12:00:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:56.278 12:00:03 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd6 bs=4096 count=256 oflag=direct 00:07:56.278 256+0 records in 00:07:56.278 256+0 records out 00:07:56.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122227 s, 8.6 MB/s 00:07:56.278 12:00:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:56.278 12:00:03 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd7 bs=4096 count=256 oflag=direct 00:07:56.535 256+0 records in 00:07:56.535 256+0 records out 00:07:56.535 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.122871 s, 8.5 MB/s 00:07:56.535 12:00:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:56.535 12:00:03 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd8 bs=4096 count=256 oflag=direct 00:07:56.535 256+0 records in 00:07:56.535 256+0 records out 00:07:56.535 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.12407 s, 8.5 MB/s 00:07:56.535 12:00:03 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:56.535 12:00:03 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest of=/dev/nbd9 bs=4096 count=256 oflag=direct 00:07:56.793 256+0 records in 00:07:56.793 256+0 records out 00:07:56.793 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.126864 s, 8.3 MB/s 00:07:56.793 12:00:03 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 
/dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' verify 00:07:56.793 12:00:03 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:56.793 12:00:03 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:56.793 12:00:03 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:56.793 12:00:03 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:07:56.793 12:00:03 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:56.793 12:00:03 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:56.793 12:00:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:56.793 12:00:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd0 00:07:56.793 12:00:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:56.793 12:00:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd1 00:07:56.793 12:00:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:56.793 12:00:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd10 00:07:56.793 12:00:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:56.793 12:00:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd11 00:07:56.793 12:00:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:56.793 12:00:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd12 00:07:56.793 12:00:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 
00:07:56.793 12:00:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd13 00:07:56.793 12:00:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:56.793 12:00:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd14 00:07:56.793 12:00:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:56.793 12:00:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd15 00:07:56.793 12:00:03 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:56.793 12:00:03 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd2 00:07:56.793 12:00:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:56.793 12:00:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd3 00:07:56.793 12:00:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:56.793 12:00:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd4 00:07:56.793 12:00:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:56.793 12:00:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd5 00:07:56.793 12:00:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:56.793 12:00:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd6 00:07:56.793 12:00:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:56.793 12:00:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd7 00:07:56.793 12:00:04 -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:07:56.793 12:00:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd8 00:07:56.793 12:00:04 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:56.793 12:00:04 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest /dev/nbd9 00:07:56.793 12:00:04 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdrandtest 00:07:56.793 12:00:04 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:07:56.793 12:00:04 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:56.793 12:00:04 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:07:56.793 12:00:04 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:56.793 12:00:04 -- bdev/nbd_common.sh@51 -- # local i 00:07:56.793 12:00:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:56.793 12:00:04 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:57.051 12:00:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:57.051 12:00:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:57.051 12:00:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:57.051 12:00:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:57.051 12:00:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.051 12:00:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:57.051 12:00:04 -- bdev/nbd_common.sh@41 -- # break 00:07:57.051 
12:00:04 -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.051 12:00:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:57.051 12:00:04 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:57.308 12:00:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:57.308 12:00:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:57.308 12:00:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:57.308 12:00:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:57.308 12:00:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.308 12:00:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:57.308 12:00:04 -- bdev/nbd_common.sh@41 -- # break 00:07:57.308 12:00:04 -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.308 12:00:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:57.308 12:00:04 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:07:57.308 12:00:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:07:57.566 12:00:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:07:57.566 12:00:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:07:57.566 12:00:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:57.566 12:00:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.566 12:00:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:07:57.566 12:00:04 -- bdev/nbd_common.sh@41 -- # break 00:07:57.566 12:00:04 -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.566 12:00:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:57.566 12:00:04 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:07:57.566 12:00:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:07:57.566 12:00:04 -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:07:57.566 12:00:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:07:57.566 12:00:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:57.566 12:00:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.566 12:00:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:07:57.566 12:00:04 -- bdev/nbd_common.sh@41 -- # break 00:07:57.566 12:00:04 -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.566 12:00:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:57.566 12:00:04 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:07:57.824 12:00:04 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:07:57.824 12:00:04 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:07:57.824 12:00:04 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:07:57.824 12:00:04 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:57.824 12:00:04 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:57.824 12:00:04 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:07:57.824 12:00:04 -- bdev/nbd_common.sh@41 -- # break 00:07:57.824 12:00:04 -- bdev/nbd_common.sh@45 -- # return 0 00:07:57.824 12:00:04 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:57.824 12:00:04 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:07:58.082 12:00:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:07:58.082 12:00:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:07:58.082 12:00:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:07:58.082 12:00:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:58.082 12:00:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:58.082 12:00:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:07:58.082 12:00:05 -- bdev/nbd_common.sh@41 -- # break 
00:07:58.082 12:00:05 -- bdev/nbd_common.sh@45 -- # return 0 00:07:58.082 12:00:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:58.082 12:00:05 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:07:58.082 12:00:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:07:58.082 12:00:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:07:58.341 12:00:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:07:58.341 12:00:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:58.341 12:00:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:58.341 12:00:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:07:58.341 12:00:05 -- bdev/nbd_common.sh@41 -- # break 00:07:58.341 12:00:05 -- bdev/nbd_common.sh@45 -- # return 0 00:07:58.341 12:00:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:58.341 12:00:05 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd15 00:07:58.341 12:00:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd15 00:07:58.341 12:00:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd15 00:07:58.341 12:00:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd15 00:07:58.341 12:00:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:58.341 12:00:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:58.341 12:00:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd15 /proc/partitions 00:07:58.341 12:00:05 -- bdev/nbd_common.sh@41 -- # break 00:07:58.341 12:00:05 -- bdev/nbd_common.sh@45 -- # return 0 00:07:58.341 12:00:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:58.341 12:00:05 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:07:58.599 12:00:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:07:58.599 12:00:05 -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:07:58.599 12:00:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:07:58.599 12:00:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:58.599 12:00:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:58.599 12:00:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:07:58.599 12:00:05 -- bdev/nbd_common.sh@41 -- # break 00:07:58.599 12:00:05 -- bdev/nbd_common.sh@45 -- # return 0 00:07:58.599 12:00:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:58.599 12:00:05 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:07:58.858 12:00:05 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:07:58.858 12:00:05 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:07:58.858 12:00:05 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:07:58.858 12:00:05 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:58.858 12:00:05 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:58.858 12:00:05 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:07:58.858 12:00:05 -- bdev/nbd_common.sh@41 -- # break 00:07:58.858 12:00:05 -- bdev/nbd_common.sh@45 -- # return 0 00:07:58.858 12:00:05 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:58.858 12:00:05 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:07:58.858 12:00:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:07:58.858 12:00:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:07:58.858 12:00:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:07:58.858 12:00:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:58.858 12:00:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:58.858 12:00:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:07:58.858 12:00:06 -- bdev/nbd_common.sh@41 -- # break 00:07:58.858 
12:00:06 -- bdev/nbd_common.sh@45 -- # return 0 00:07:58.858 12:00:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:58.858 12:00:06 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:07:59.116 12:00:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:07:59.116 12:00:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:07:59.116 12:00:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:07:59.116 12:00:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.116 12:00:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.116 12:00:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:07:59.116 12:00:06 -- bdev/nbd_common.sh@41 -- # break 00:07:59.116 12:00:06 -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.116 12:00:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.116 12:00:06 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:07:59.375 12:00:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:07:59.375 12:00:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:07:59.375 12:00:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd6 00:07:59.375 12:00:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.375 12:00:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.375 12:00:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:07:59.375 12:00:06 -- bdev/nbd_common.sh@41 -- # break 00:07:59.375 12:00:06 -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.375 12:00:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.375 12:00:06 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd7 00:07:59.633 12:00:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd7 00:07:59.633 12:00:06 -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd7 00:07:59.633 12:00:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd7 00:07:59.633 12:00:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.633 12:00:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.633 12:00:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd7 /proc/partitions 00:07:59.633 12:00:06 -- bdev/nbd_common.sh@41 -- # break 00:07:59.633 12:00:06 -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.633 12:00:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.633 12:00:06 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd8 00:07:59.633 12:00:06 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd8 00:07:59.633 12:00:06 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd8 00:07:59.633 12:00:06 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd8 00:07:59.633 12:00:06 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.633 12:00:06 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.633 12:00:06 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd8 /proc/partitions 00:07:59.633 12:00:06 -- bdev/nbd_common.sh@41 -- # break 00:07:59.633 12:00:06 -- bdev/nbd_common.sh@45 -- # return 0 00:07:59.633 12:00:06 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:59.633 12:00:06 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd9 00:07:59.892 12:00:07 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd9 00:07:59.892 12:00:07 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd9 00:07:59.892 12:00:07 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd9 00:07:59.892 12:00:07 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:59.892 12:00:07 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:59.892 12:00:07 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd9 /proc/partitions 00:07:59.892 12:00:07 -- bdev/nbd_common.sh@41 -- # break 00:07:59.892 12:00:07 -- bdev/nbd_common.sh@45 
-- # return 0 00:07:59.892 12:00:07 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:59.892 12:00:07 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:59.892 12:00:07 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:00.150 12:00:07 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:00.150 12:00:07 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:00.150 12:00:07 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:00.150 12:00:07 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:00.150 12:00:07 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:00.150 12:00:07 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:00.150 12:00:07 -- bdev/nbd_common.sh@65 -- # true 00:08:00.150 12:00:07 -- bdev/nbd_common.sh@65 -- # count=0 00:08:00.150 12:00:07 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:00.150 12:00:07 -- bdev/nbd_common.sh@104 -- # count=0 00:08:00.151 12:00:07 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:00.151 12:00:07 -- bdev/nbd_common.sh@109 -- # return 0 00:08:00.151 12:00:07 -- bdev/blockdev.sh@322 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14 /dev/nbd15 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6 /dev/nbd7 /dev/nbd8 /dev/nbd9' 00:08:00.151 12:00:07 -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.151 12:00:07 -- bdev/nbd_common.sh@132 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:08:00.151 12:00:07 -- bdev/nbd_common.sh@132 -- # local nbd_list 00:08:00.151 12:00:07 -- bdev/nbd_common.sh@133 -- # local mkfs_ret 00:08:00.151 12:00:07 -- bdev/nbd_common.sh@135 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:08:00.409 malloc_lvol_verify 00:08:00.409 12:00:07 -- bdev/nbd_common.sh@136 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:08:00.409 29b4d6ea-ee6f-4a3c-8033-a15f7651a85c 00:08:00.409 12:00:07 -- bdev/nbd_common.sh@137 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:08:00.668 c561ad16-ad67-4248-b287-20ed8600774e 00:08:00.668 12:00:07 -- bdev/nbd_common.sh@138 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:08:00.926 /dev/nbd0 00:08:00.926 12:00:08 -- bdev/nbd_common.sh@140 -- # mkfs.ext4 /dev/nbd0 00:08:00.926 mke2fs 1.46.5 (30-Dec-2021) 00:08:00.926 Discarding device blocks: 0/4096 done 00:08:00.926 Creating filesystem with 4096 1k blocks and 1024 inodes 00:08:00.926 00:08:00.926 Allocating group tables: 0/1 done 00:08:00.926 Writing inode tables: 0/1 done 00:08:00.926 Creating journal (1024 blocks): done 00:08:00.926 Writing superblocks and filesystem accounting information: 0/1 done 00:08:00.926 00:08:00.926 12:00:08 -- bdev/nbd_common.sh@141 -- # mkfs_ret=0 00:08:00.926 12:00:08 -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:08:00.926 12:00:08 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:00.926 12:00:08 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:00.926 12:00:08 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:00.926 12:00:08 -- bdev/nbd_common.sh@51 -- # local i 00:08:00.926 12:00:08 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:00.926 12:00:08 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:01.185 12:00:08 -- bdev/nbd_common.sh@55 
-- # basename /dev/nbd0 00:08:01.185 12:00:08 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:01.185 12:00:08 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:01.185 12:00:08 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:01.185 12:00:08 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:01.185 12:00:08 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:01.185 12:00:08 -- bdev/nbd_common.sh@41 -- # break 00:08:01.185 12:00:08 -- bdev/nbd_common.sh@45 -- # return 0 00:08:01.185 12:00:08 -- bdev/nbd_common.sh@143 -- # '[' 0 -ne 0 ']' 00:08:01.185 12:00:08 -- bdev/nbd_common.sh@147 -- # return 0 00:08:01.185 12:00:08 -- bdev/blockdev.sh@324 -- # killprocess 1199869 00:08:01.185 12:00:08 -- common/autotest_common.sh@926 -- # '[' -z 1199869 ']' 00:08:01.185 12:00:08 -- common/autotest_common.sh@930 -- # kill -0 1199869 00:08:01.185 12:00:08 -- common/autotest_common.sh@931 -- # uname 00:08:01.185 12:00:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:01.185 12:00:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1199869 00:08:01.185 12:00:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:08:01.185 12:00:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:08:01.185 12:00:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1199869' 00:08:01.185 killing process with pid 1199869 00:08:01.185 12:00:08 -- common/autotest_common.sh@945 -- # kill 1199869 00:08:01.185 12:00:08 -- common/autotest_common.sh@950 -- # wait 1199869 00:08:01.443 12:00:08 -- bdev/blockdev.sh@325 -- # trap - SIGINT SIGTERM EXIT 00:08:01.443 00:08:01.443 real 0m17.852s 00:08:01.443 user 0m21.067s 00:08:01.443 sys 0m10.517s 00:08:01.443 12:00:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:01.443 12:00:08 -- common/autotest_common.sh@10 -- # set +x 00:08:01.443 ************************************ 00:08:01.443 END TEST bdev_nbd 00:08:01.443 
************************************ 00:08:01.443 12:00:08 -- bdev/blockdev.sh@761 -- # [[ y == y ]] 00:08:01.443 12:00:08 -- bdev/blockdev.sh@762 -- # '[' bdev = nvme ']' 00:08:01.443 12:00:08 -- bdev/blockdev.sh@762 -- # '[' bdev = gpt ']' 00:08:01.443 12:00:08 -- bdev/blockdev.sh@766 -- # run_test bdev_fio fio_test_suite '' 00:08:01.444 12:00:08 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:01.444 12:00:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:01.444 12:00:08 -- common/autotest_common.sh@10 -- # set +x 00:08:01.444 ************************************ 00:08:01.444 START TEST bdev_fio 00:08:01.444 ************************************ 00:08:01.444 12:00:08 -- common/autotest_common.sh@1104 -- # fio_test_suite '' 00:08:01.444 12:00:08 -- bdev/blockdev.sh@329 -- # local env_context 00:08:01.444 12:00:08 -- bdev/blockdev.sh@333 -- # pushd /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:08:01.444 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev /var/jenkins/workspace/crypto-phy-autotest/spdk 00:08:01.444 12:00:08 -- bdev/blockdev.sh@334 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:08:01.444 12:00:08 -- bdev/blockdev.sh@337 -- # echo '' 00:08:01.444 12:00:08 -- bdev/blockdev.sh@337 -- # sed s/--env-context=// 00:08:01.444 12:00:08 -- bdev/blockdev.sh@337 -- # env_context= 00:08:01.444 12:00:08 -- bdev/blockdev.sh@338 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio verify AIO '' 00:08:01.444 12:00:08 -- common/autotest_common.sh@1259 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:01.444 12:00:08 -- common/autotest_common.sh@1260 -- # local workload=verify 00:08:01.444 12:00:08 -- common/autotest_common.sh@1261 -- # local bdev_type=AIO 00:08:01.444 12:00:08 -- common/autotest_common.sh@1262 -- # local env_context= 00:08:01.444 12:00:08 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 
00:08:01.444 12:00:08 -- common/autotest_common.sh@1265 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:08:01.444 12:00:08 -- common/autotest_common.sh@1270 -- # '[' -z verify ']' 00:08:01.444 12:00:08 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:08:01.444 12:00:08 -- common/autotest_common.sh@1278 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:01.444 12:00:08 -- common/autotest_common.sh@1280 -- # cat 00:08:01.444 12:00:08 -- common/autotest_common.sh@1292 -- # '[' verify == verify ']' 00:08:01.444 12:00:08 -- common/autotest_common.sh@1293 -- # cat 00:08:01.444 12:00:08 -- common/autotest_common.sh@1302 -- # '[' AIO == AIO ']' 00:08:01.444 12:00:08 -- common/autotest_common.sh@1303 -- # /usr/src/fio/fio --version 00:08:01.703 12:00:08 -- common/autotest_common.sh@1303 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:08:01.703 12:00:08 -- common/autotest_common.sh@1304 -- # echo serialize_overlap=1 00:08:01.703 12:00:08 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:08:01.703 12:00:08 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc0]' 00:08:01.703 12:00:08 -- bdev/blockdev.sh@341 -- # echo filename=Malloc0 00:08:01.703 12:00:08 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:08:01.703 12:00:08 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p0]' 00:08:01.703 12:00:08 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p0 00:08:01.703 12:00:08 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:08:01.703 12:00:08 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc1p1]' 00:08:01.703 12:00:08 -- bdev/blockdev.sh@341 -- # echo filename=Malloc1p1 00:08:01.703 12:00:08 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:08:01.703 12:00:08 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p0]' 00:08:01.703 12:00:08 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p0 00:08:01.703 12:00:08 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:08:01.703 12:00:08 
-- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p1]' 00:08:01.703 12:00:08 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p1 00:08:01.703 12:00:08 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:08:01.703 12:00:08 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p2]' 00:08:01.703 12:00:08 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p2 00:08:01.703 12:00:08 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:08:01.703 12:00:08 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p3]' 00:08:01.703 12:00:08 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p3 00:08:01.703 12:00:08 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:08:01.703 12:00:08 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p4]' 00:08:01.703 12:00:08 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p4 00:08:01.703 12:00:08 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:08:01.703 12:00:08 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p5]' 00:08:01.703 12:00:08 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p5 00:08:01.703 12:00:08 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:08:01.703 12:00:08 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p6]' 00:08:01.703 12:00:08 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p6 00:08:01.703 12:00:08 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:08:01.703 12:00:08 -- bdev/blockdev.sh@340 -- # echo '[job_Malloc2p7]' 00:08:01.703 12:00:08 -- bdev/blockdev.sh@341 -- # echo filename=Malloc2p7 00:08:01.703 12:00:08 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:08:01.703 12:00:08 -- bdev/blockdev.sh@340 -- # echo '[job_TestPT]' 00:08:01.703 12:00:08 -- bdev/blockdev.sh@341 -- # echo filename=TestPT 00:08:01.703 12:00:08 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:08:01.703 12:00:08 -- bdev/blockdev.sh@340 -- # echo '[job_raid0]' 00:08:01.703 12:00:08 -- bdev/blockdev.sh@341 -- # echo filename=raid0 00:08:01.703 12:00:08 -- bdev/blockdev.sh@339 -- # for b in 
"${bdevs_name[@]}" 00:08:01.703 12:00:08 -- bdev/blockdev.sh@340 -- # echo '[job_concat0]' 00:08:01.703 12:00:08 -- bdev/blockdev.sh@341 -- # echo filename=concat0 00:08:01.703 12:00:08 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:08:01.703 12:00:08 -- bdev/blockdev.sh@340 -- # echo '[job_raid1]' 00:08:01.703 12:00:08 -- bdev/blockdev.sh@341 -- # echo filename=raid1 00:08:01.703 12:00:08 -- bdev/blockdev.sh@339 -- # for b in "${bdevs_name[@]}" 00:08:01.703 12:00:08 -- bdev/blockdev.sh@340 -- # echo '[job_AIO0]' 00:08:01.703 12:00:08 -- bdev/blockdev.sh@341 -- # echo filename=AIO0 00:08:01.703 12:00:08 -- bdev/blockdev.sh@345 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json' 00:08:01.703 12:00:08 -- bdev/blockdev.sh@347 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:01.703 12:00:08 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:08:01.703 12:00:08 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:01.703 12:00:08 -- common/autotest_common.sh@10 -- # set +x 00:08:01.703 ************************************ 00:08:01.703 START TEST bdev_fio_rw_verify 00:08:01.703 ************************************ 00:08:01.703 12:00:08 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 
--aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:01.703 12:00:08 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:01.703 12:00:08 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:08:01.703 12:00:08 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:01.703 12:00:08 -- common/autotest_common.sh@1318 -- # local sanitizers 00:08:01.703 12:00:08 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:01.703 12:00:08 -- common/autotest_common.sh@1320 -- # shift 00:08:01.703 12:00:08 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:08:01.703 12:00:08 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:08:01.703 12:00:08 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:01.703 12:00:08 -- common/autotest_common.sh@1324 -- # grep libasan 00:08:01.703 12:00:08 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:08:01.703 12:00:08 -- common/autotest_common.sh@1324 -- # asan_lib= 00:08:01.703 12:00:08 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:08:01.703 12:00:08 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:08:01.703 12:00:08 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:01.703 12:00:08 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:08:01.703 12:00:08 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 
00:08:01.703 12:00:08 -- common/autotest_common.sh@1324 -- # asan_lib= 00:08:01.703 12:00:08 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:08:01.703 12:00:08 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:08:01.703 12:00:08 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:01.962 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:01.962 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:01.962 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:01.962 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:01.962 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:01.962 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:01.962 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:01.962 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:01.962 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:01.962 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, 
iodepth=8 00:08:01.962 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:01.962 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:01.962 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:01.962 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:01.962 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:01.962 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:01.962 fio-3.35 00:08:01.962 Starting 16 threads 00:08:14.185 00:08:14.186 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=1203488: Thu Jul 25 12:00:19 2024 00:08:14.186 read: IOPS=105k, BW=410MiB/s (430MB/s)(4100MiB/10001msec) 00:08:14.186 slat (nsec): min=1920, max=826077, avg=31256.87, stdev=13053.88 00:08:14.186 clat (usec): min=6, max=987, avg=256.81, stdev=119.28 00:08:14.186 lat (usec): min=10, max=1001, avg=288.07, stdev=126.00 00:08:14.186 clat percentiles (usec): 00:08:14.186 | 50.000th=[ 251], 99.000th=[ 506], 99.900th=[ 578], 99.990th=[ 734], 00:08:14.186 | 99.999th=[ 857] 00:08:14.186 write: IOPS=165k, BW=643MiB/s (674MB/s)(6337MiB/9862msec); 0 zone resets 00:08:14.186 slat (usec): min=4, max=1444, avg=41.41, stdev=13.13 00:08:14.186 clat (usec): min=8, max=4160, avg=297.67, stdev=135.86 00:08:14.186 lat (usec): min=28, max=4195, avg=339.08, stdev=142.39 00:08:14.186 clat percentiles (usec): 00:08:14.186 | 50.000th=[ 285], 99.000th=[ 635], 99.900th=[ 832], 99.990th=[ 906], 00:08:14.186 | 99.999th=[ 1663] 00:08:14.186 bw ( KiB/s): min=552632, max=936791, per=98.94%, avg=651028.58, stdev=5585.10, samples=304 00:08:14.186 iops : min=138158, max=234194, avg=162756.95, 
stdev=1396.24, samples=304 00:08:14.186 lat (usec) : 10=0.01%, 20=0.03%, 50=0.94%, 100=6.27%, 250=37.49% 00:08:14.186 lat (usec) : 500=50.55%, 750=4.48%, 1000=0.24% 00:08:14.186 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01% 00:08:14.186 cpu : usr=99.25%, sys=0.38%, ctx=608, majf=0, minf=2593 00:08:14.186 IO depths : 1=12.4%, 2=24.8%, 4=50.3%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:14.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:14.186 complete : 0=0.0%, 4=89.1%, 8=10.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:14.186 issued rwts: total=1049600,1622299,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:14.186 latency : target=0, window=0, percentile=100.00%, depth=8 00:08:14.186 00:08:14.186 Run status group 0 (all jobs): 00:08:14.186 READ: bw=410MiB/s (430MB/s), 410MiB/s-410MiB/s (430MB/s-430MB/s), io=4100MiB (4299MB), run=10001-10001msec 00:08:14.186 WRITE: bw=643MiB/s (674MB/s), 643MiB/s-643MiB/s (674MB/s-674MB/s), io=6337MiB (6645MB), run=9862-9862msec 00:08:14.186 00:08:14.186 real 0m11.374s 00:08:14.186 user 2m44.597s 00:08:14.186 sys 0m1.460s 00:08:14.186 12:00:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:14.186 12:00:20 -- common/autotest_common.sh@10 -- # set +x 00:08:14.186 ************************************ 00:08:14.186 END TEST bdev_fio_rw_verify 00:08:14.186 ************************************ 00:08:14.186 12:00:20 -- bdev/blockdev.sh@348 -- # rm -f 00:08:14.186 12:00:20 -- bdev/blockdev.sh@349 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:14.186 12:00:20 -- bdev/blockdev.sh@352 -- # fio_config_gen /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio trim '' '' 00:08:14.186 12:00:20 -- common/autotest_common.sh@1259 -- # local config_file=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:14.186 12:00:20 -- common/autotest_common.sh@1260 -- # local workload=trim 00:08:14.186 12:00:20 -- common/autotest_common.sh@1261 -- # local 
bdev_type= 00:08:14.186 12:00:20 -- common/autotest_common.sh@1262 -- # local env_context= 00:08:14.186 12:00:20 -- common/autotest_common.sh@1263 -- # local fio_dir=/usr/src/fio 00:08:14.186 12:00:20 -- common/autotest_common.sh@1265 -- # '[' -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio ']' 00:08:14.186 12:00:20 -- common/autotest_common.sh@1270 -- # '[' -z trim ']' 00:08:14.186 12:00:20 -- common/autotest_common.sh@1274 -- # '[' -n '' ']' 00:08:14.186 12:00:20 -- common/autotest_common.sh@1278 -- # touch /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:14.186 12:00:20 -- common/autotest_common.sh@1280 -- # cat 00:08:14.186 12:00:20 -- common/autotest_common.sh@1292 -- # '[' trim == verify ']' 00:08:14.186 12:00:20 -- common/autotest_common.sh@1307 -- # '[' trim == trim ']' 00:08:14.186 12:00:20 -- common/autotest_common.sh@1308 -- # echo rw=trimwrite 00:08:14.186 12:00:20 -- bdev/blockdev.sh@353 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:08:14.187 12:00:20 -- bdev/blockdev.sh@353 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "bb6df1fd-e622-4cd2-956b-65e624ed6208"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "bb6df1fd-e622-4cd2-956b-65e624ed6208",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "9930b069-d32d-5045-9425-1fd88cdf0791"' ' ],' ' "product_name": 
"Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "9930b069-d32d-5045-9425-1fd88cdf0791",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "14cd1f67-23b6-52cc-86d5-236d242c489b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "14cd1f67-23b6-52cc-86d5-236d242c489b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "fde8c720-c24a-5c5f-bfae-808407523f7e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "fde8c720-c24a-5c5f-bfae-808407523f7e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": 
false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "a9381b9d-9ff7-5a93-aeb9-6bfe264e5628"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a9381b9d-9ff7-5a93-aeb9-6bfe264e5628",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "2505d0fc-aa4e-5cae-b1cf-f4d2152f4b50"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2505d0fc-aa4e-5cae-b1cf-f4d2152f4b50",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "6493a58f-cda7-556d-ad3f-93218b47db12"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6493a58f-cda7-556d-ad3f-93218b47db12",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' 
"rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "7b75ec33-db35-521f-be99-0960aa483abb"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7b75ec33-db35-521f-be99-0960aa483abb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "02d12f55-28b3-5ac1-b2c2-4272735a3a65"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "02d12f55-28b3-5ac1-b2c2-4272735a3a65",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 
40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "a46813e8-c647-54ca-a5b1-cb3e8db17eaa"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a46813e8-c647-54ca-a5b1-cb3e8db17eaa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "4243129a-6433-54d9-828b-15134ca43904"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4243129a-6433-54d9-828b-15134ca43904",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "9c5706c6-0511-51c8-9cbc-f30f7b9baaca"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "9c5706c6-0511-51c8-9cbc-f30f7b9baaca",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": 
true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "33d8955f-cf34-40f7-910d-0c7399dc3a00"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "33d8955f-cf34-40f7-910d-0c7399dc3a00",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "33d8955f-cf34-40f7-910d-0c7399dc3a00",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "8dc5ae78-0c30-442d-bfa7-722af2dd7886",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "b7d93b0a-2932-44bc-9c13-f83def8b8034",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "62f5f36c-518e-48fc-a20e-8acb7fe4b9ed"' ' ],' ' 
"product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "62f5f36c-518e-48fc-a20e-8acb7fe4b9ed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "62f5f36c-518e-48fc-a20e-8acb7fe4b9ed",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "587b6824-56dc-4cf5-830c-4a785dcce732",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "83dcc151-a468-48b6-8fda-05491b265630",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "ce70b93d-f655-49de-9b59-4d053ef2dfe6"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ce70b93d-f655-49de-9b59-4d053ef2dfe6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' 
"nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "ce70b93d-f655-49de-9b59-4d053ef2dfe6",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "22a3ea40-eb54-440b-b591-8871597df5a4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "5f6e0c7a-d3b6-4745-af20-a5acdee3a0b8",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "87095e70-a15d-4787-9513-dbfcd77568e7"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "87095e70-a15d-4787-9513-dbfcd77568e7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:08:14.187 12:00:20 -- bdev/blockdev.sh@353 -- # [[ -n Malloc0 00:08:14.187 Malloc1p0 00:08:14.187 Malloc1p1 00:08:14.187 Malloc2p0 00:08:14.187 Malloc2p1 00:08:14.187 Malloc2p2 00:08:14.187 Malloc2p3 00:08:14.187 Malloc2p4 00:08:14.187 Malloc2p5 00:08:14.187 Malloc2p6 00:08:14.187 Malloc2p7 
00:08:14.187 TestPT 00:08:14.187 raid0 00:08:14.187 concat0 ]] 00:08:14.187 12:00:20 -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:08:14.188 12:00:20 -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "bb6df1fd-e622-4cd2-956b-65e624ed6208"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "bb6df1fd-e622-4cd2-956b-65e624ed6208",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "9930b069-d32d-5045-9425-1fd88cdf0791"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "9930b069-d32d-5045-9425-1fd88cdf0791",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "14cd1f67-23b6-52cc-86d5-236d242c489b"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": 
"14cd1f67-23b6-52cc-86d5-236d242c489b",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "fde8c720-c24a-5c5f-bfae-808407523f7e"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "fde8c720-c24a-5c5f-bfae-808407523f7e",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "a9381b9d-9ff7-5a93-aeb9-6bfe264e5628"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a9381b9d-9ff7-5a93-aeb9-6bfe264e5628",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' 
},' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "2505d0fc-aa4e-5cae-b1cf-f4d2152f4b50"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "2505d0fc-aa4e-5cae-b1cf-f4d2152f4b50",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "6493a58f-cda7-556d-ad3f-93218b47db12"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "6493a58f-cda7-556d-ad3f-93218b47db12",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "7b75ec33-db35-521f-be99-0960aa483abb"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "7b75ec33-db35-521f-be99-0960aa483abb",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' 
"claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "02d12f55-28b3-5ac1-b2c2-4272735a3a65"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "02d12f55-28b3-5ac1-b2c2-4272735a3a65",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "a46813e8-c647-54ca-a5b1-cb3e8db17eaa"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "a46813e8-c647-54ca-a5b1-cb3e8db17eaa",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' 
"4243129a-6433-54d9-828b-15134ca43904"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "4243129a-6433-54d9-828b-15134ca43904",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "9c5706c6-0511-51c8-9cbc-f30f7b9baaca"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "9c5706c6-0511-51c8-9cbc-f30f7b9baaca",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "33d8955f-cf34-40f7-910d-0c7399dc3a00"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "33d8955f-cf34-40f7-910d-0c7399dc3a00",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "33d8955f-cf34-40f7-910d-0c7399dc3a00",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "8dc5ae78-0c30-442d-bfa7-722af2dd7886",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "b7d93b0a-2932-44bc-9c13-f83def8b8034",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "62f5f36c-518e-48fc-a20e-8acb7fe4b9ed"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "62f5f36c-518e-48fc-a20e-8acb7fe4b9ed",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": 
"62f5f36c-518e-48fc-a20e-8acb7fe4b9ed",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "587b6824-56dc-4cf5-830c-4a785dcce732",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "83dcc151-a468-48b6-8fda-05491b265630",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "ce70b93d-f655-49de-9b59-4d053ef2dfe6"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "ce70b93d-f655-49de-9b59-4d053ef2dfe6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": false,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "ce70b93d-f655-49de-9b59-4d053ef2dfe6",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "22a3ea40-eb54-440b-b591-8871597df5a4",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "5f6e0c7a-d3b6-4745-af20-a5acdee3a0b8",' ' "is_configured": true,' ' 
"data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "87095e70-a15d-4787-9513-dbfcd77568e7"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "87095e70-a15d-4787-9513-dbfcd77568e7",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "write_zeroes": true,' ' "flush": true,' ' "reset": true,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "nvme_admin": false,' ' "nvme_io": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false' ' }' ' }' '}' 00:08:14.188 12:00:20 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:14.188 12:00:20 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc0]' 00:08:14.188 12:00:20 -- bdev/blockdev.sh@356 -- # echo filename=Malloc0 00:08:14.188 12:00:20 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:14.188 12:00:20 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p0]' 00:08:14.188 12:00:20 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p0 00:08:14.188 12:00:20 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:14.188 12:00:20 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc1p1]' 00:08:14.188 12:00:20 -- bdev/blockdev.sh@356 -- # echo filename=Malloc1p1 00:08:14.188 12:00:20 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:14.188 12:00:20 -- bdev/blockdev.sh@355 
-- # echo '[job_Malloc2p0]' 00:08:14.188 12:00:20 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p0 00:08:14.188 12:00:20 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:14.188 12:00:20 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p1]' 00:08:14.188 12:00:20 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p1 00:08:14.188 12:00:20 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:14.188 12:00:20 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p2]' 00:08:14.188 12:00:20 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p2 00:08:14.188 12:00:20 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:14.188 12:00:20 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p3]' 00:08:14.188 12:00:20 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p3 00:08:14.188 12:00:20 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:14.188 12:00:20 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p4]' 00:08:14.188 12:00:20 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p4 00:08:14.188 12:00:20 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:14.188 12:00:20 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p5]' 00:08:14.188 12:00:20 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p5 00:08:14.188 12:00:20 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:14.188 12:00:20 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p6]' 00:08:14.188 12:00:20 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p6 00:08:14.188 12:00:20 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq 
-r 'select(.supported_io_types.unmap == true) | .name') 00:08:14.188 12:00:20 -- bdev/blockdev.sh@355 -- # echo '[job_Malloc2p7]' 00:08:14.188 12:00:20 -- bdev/blockdev.sh@356 -- # echo filename=Malloc2p7 00:08:14.188 12:00:20 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:14.188 12:00:20 -- bdev/blockdev.sh@355 -- # echo '[job_TestPT]' 00:08:14.188 12:00:20 -- bdev/blockdev.sh@356 -- # echo filename=TestPT 00:08:14.188 12:00:20 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:14.188 12:00:20 -- bdev/blockdev.sh@355 -- # echo '[job_raid0]' 00:08:14.188 12:00:20 -- bdev/blockdev.sh@356 -- # echo filename=raid0 00:08:14.188 12:00:20 -- bdev/blockdev.sh@354 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:08:14.188 12:00:20 -- bdev/blockdev.sh@355 -- # echo '[job_concat0]' 00:08:14.188 12:00:20 -- bdev/blockdev.sh@356 -- # echo filename=concat0 00:08:14.188 12:00:20 -- bdev/blockdev.sh@365 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:14.188 12:00:20 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:08:14.188 12:00:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:14.188 12:00:20 -- common/autotest_common.sh@10 -- # set +x 00:08:14.188 ************************************ 00:08:14.188 START TEST bdev_fio_trim 00:08:14.188 ************************************ 00:08:14.189 12:00:20 -- common/autotest_common.sh@1104 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:14.189 12:00:20 -- common/autotest_common.sh@1335 -- # fio_plugin /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:14.189 12:00:20 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:08:14.189 12:00:20 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:14.189 12:00:20 -- common/autotest_common.sh@1318 -- # local sanitizers 00:08:14.189 12:00:20 -- common/autotest_common.sh@1319 -- # local plugin=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:14.189 12:00:20 -- common/autotest_common.sh@1320 -- # shift 00:08:14.189 12:00:20 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:08:14.189 12:00:20 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:08:14.189 12:00:20 -- common/autotest_common.sh@1324 -- # ldd /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:14.189 12:00:20 -- common/autotest_common.sh@1324 -- # grep libasan 00:08:14.189 12:00:20 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:08:14.189 12:00:20 -- common/autotest_common.sh@1324 -- # asan_lib= 00:08:14.189 12:00:20 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:08:14.189 12:00:20 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:08:14.189 12:00:20 -- common/autotest_common.sh@1324 -- # ldd 
/var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev 00:08:14.189 12:00:20 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:08:14.189 12:00:20 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:08:14.189 12:00:20 -- common/autotest_common.sh@1324 -- # asan_lib= 00:08:14.189 12:00:20 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:08:14.189 12:00:20 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /var/jenkins/workspace/crypto-phy-autotest/spdk/build/fio/spdk_bdev' 00:08:14.189 12:00:20 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/var/jenkins/workspace/crypto-phy-autotest/spdk/../output 00:08:14.189 job_Malloc0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:14.189 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:14.189 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:14.189 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:14.189 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:14.189 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:14.189 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:14.189 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:14.189 
job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:14.189 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:14.189 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:14.189 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:14.189 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:14.189 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:08:14.189 fio-3.35 00:08:14.189 Starting 14 threads 00:08:24.145 00:08:24.145 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=1205159: Thu Jul 25 12:00:31 2024 00:08:24.145 write: IOPS=142k, BW=556MiB/s (582MB/s)(5556MiB/10001msec); 0 zone resets 00:08:24.145 slat (nsec): min=1946, max=971093, avg=34670.53, stdev=9271.40 00:08:24.145 clat (usec): min=21, max=3341, avg=245.77, stdev=82.19 00:08:24.145 lat (usec): min=35, max=3371, avg=280.44, stdev=85.03 00:08:24.145 clat percentiles (usec): 00:08:24.145 | 50.000th=[ 239], 99.000th=[ 420], 99.900th=[ 469], 99.990th=[ 519], 00:08:24.145 | 99.999th=[ 586] 00:08:24.145 bw ( KiB/s): min=512992, max=642672, per=100.00%, avg=569109.05, stdev=2298.89, samples=266 00:08:24.145 iops : min=128248, max=160668, avg=142277.26, stdev=574.72, samples=266 00:08:24.145 trim: IOPS=142k, BW=556MiB/s (582MB/s)(5556MiB/10001msec); 0 zone resets 00:08:24.145 slat (usec): min=3, max=3012, avg=24.19, stdev= 7.10 00:08:24.145 clat (usec): min=3, max=3372, avg=277.80, stdev=88.11 00:08:24.145 lat (usec): min=13, max=3388, avg=301.98, stdev=90.53 00:08:24.145 clat percentiles (usec): 00:08:24.145 | 50.000th=[ 273], 99.000th=[ 461], 99.900th=[ 515], 99.990th=[ 570], 00:08:24.145 | 
99.999th=[ 635] 00:08:24.145 bw ( KiB/s): min=512992, max=642680, per=100.00%, avg=569109.47, stdev=2298.99, samples=266 00:08:24.145 iops : min=128248, max=160670, avg=142277.37, stdev=574.75, samples=266 00:08:24.145 lat (usec) : 4=0.01%, 10=0.01%, 20=0.01%, 50=0.13%, 100=1.58% 00:08:24.145 lat (usec) : 250=46.25%, 500=51.93%, 750=0.10% 00:08:24.145 lat (msec) : 2=0.01%, 4=0.01% 00:08:24.145 cpu : usr=99.66%, sys=0.00%, ctx=452, majf=0, minf=994 00:08:24.145 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:24.145 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:24.145 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:24.145 issued rwts: total=0,1422239,1422243,0 short=0,0,0,0 dropped=0,0,0,0 00:08:24.145 latency : target=0, window=0, percentile=100.00%, depth=8 00:08:24.145 00:08:24.145 Run status group 0 (all jobs): 00:08:24.145 WRITE: bw=556MiB/s (582MB/s), 556MiB/s-556MiB/s (582MB/s-582MB/s), io=5556MiB (5825MB), run=10001-10001msec 00:08:24.145 TRIM: bw=556MiB/s (582MB/s), 556MiB/s-556MiB/s (582MB/s-582MB/s), io=5556MiB (5826MB), run=10001-10001msec 00:08:24.403 00:08:24.403 real 0m11.277s 00:08:24.403 user 2m24.465s 00:08:24.403 sys 0m0.683s 00:08:24.403 12:00:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.403 12:00:31 -- common/autotest_common.sh@10 -- # set +x 00:08:24.403 ************************************ 00:08:24.403 END TEST bdev_fio_trim 00:08:24.403 ************************************ 00:08:24.403 12:00:31 -- bdev/blockdev.sh@366 -- # rm -f 00:08:24.403 12:00:31 -- bdev/blockdev.sh@367 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.fio 00:08:24.403 12:00:31 -- bdev/blockdev.sh@368 -- # popd 00:08:24.403 /var/jenkins/workspace/crypto-phy-autotest/spdk 00:08:24.403 12:00:31 -- bdev/blockdev.sh@369 -- # trap - SIGINT SIGTERM EXIT 00:08:24.403 00:08:24.403 real 0m22.933s 00:08:24.403 user 5m9.225s 00:08:24.403 sys 
0m2.289s 00:08:24.403 12:00:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.403 12:00:31 -- common/autotest_common.sh@10 -- # set +x 00:08:24.403 ************************************ 00:08:24.403 END TEST bdev_fio 00:08:24.403 ************************************ 00:08:24.403 12:00:31 -- bdev/blockdev.sh@773 -- # trap cleanup SIGINT SIGTERM EXIT 00:08:24.403 12:00:31 -- bdev/blockdev.sh@775 -- # run_test bdev_verify /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:24.403 12:00:31 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']' 00:08:24.403 12:00:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:24.403 12:00:31 -- common/autotest_common.sh@10 -- # set +x 00:08:24.403 ************************************ 00:08:24.403 START TEST bdev_verify 00:08:24.403 ************************************ 00:08:24.403 12:00:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:08:24.662 [2024-07-25 12:00:31.740058] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:08:24.662 [2024-07-25 12:00:31.740107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1206667 ]
00:08:24.662 [2024-07-25 12:00:31.829619] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:24.662 [2024-07-25 12:00:31.918196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:24.662 [2024-07-25 12:00:31.918199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:24.921 [2024-07-25 12:00:32.061163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:08:24.921 [2024-07-25 12:00:32.061214] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:08:24.921 [2024-07-25 12:00:32.061224] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:08:24.921 [2024-07-25 12:00:32.069188] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:08:24.921 [2024-07-25 12:00:32.069209] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:08:24.921 [2024-07-25 12:00:32.077191] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:08:24.921 [2024-07-25 12:00:32.077209] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:08:24.921 [2024-07-25 12:00:32.150723] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:08:24.921 [2024-07-25 12:00:32.150767] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:24.921 [2024-07-25 12:00:32.150780] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11eec10
00:08:24.921 [2024-07-25 12:00:32.150788] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:24.921 [2024-07-25 12:00:32.151931] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:24.921 [2024-07-25 12:00:32.151955] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:08:25.180 Running I/O for 5 seconds...
00:08:30.481
00:08:30.481 Latency(us)
00:08:30.481 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:30.481 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:30.481 Verification LBA range: start 0x0 length 0x1000
00:08:30.481 Malloc0 : 5.12 2314.17 9.04 0.00 0.00 55010.33 1666.89 122181.90
00:08:30.481 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:30.481 Verification LBA range: start 0x1000 length 0x1000
00:08:30.481 Malloc0 : 5.11 2292.70 8.96 0.00 0.00 55398.76 1674.02 170507.58
00:08:30.481 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:30.481 Verification LBA range: start 0x0 length 0x800
00:08:30.481 Malloc1p0 : 5.12 1573.47 6.15 0.00 0.00 80834.13 3875.17 111240.24
00:08:30.481 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:30.481 Verification LBA range: start 0x800 length 0x800
00:08:30.481 Malloc1p0 : 5.11 1577.39 6.16 0.00 0.00 80538.73 3875.17 107137.11
00:08:30.481 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:30.481 Verification LBA range: start 0x0 length 0x800
00:08:30.481 Malloc1p1 : 5.13 1573.13 6.15 0.00 0.00 80731.86 3818.18 106681.21
00:08:30.481 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:30.481 Verification LBA range: start 0x800 length 0x800
00:08:30.481 Malloc1p1 : 5.11 1577.07 6.16 0.00 0.00 80429.72 3818.18 102578.09
00:08:30.481 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:30.481 Verification LBA range: start 0x0 length 0x200
00:08:30.481 Malloc2p0 : 5.13 1572.82 6.14 0.00 0.00 80617.32 3333.79 101666.28
00:08:30.481 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:30.481 Verification LBA range: start 0x200 length 0x200
00:08:30.481 Malloc2p0 : 5.13 1586.65 6.20 0.00 0.00 80019.22 3362.28 97563.16
00:08:30.481 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:30.481 Verification LBA range: start 0x0 length 0x200
00:08:30.481 Malloc2p1 : 5.13 1572.50 6.14 0.00 0.00 80518.59 3191.32 98474.96
00:08:30.481 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:30.481 Verification LBA range: start 0x200 length 0x200
00:08:30.481 Malloc2p1 : 5.13 1585.94 6.20 0.00 0.00 79930.97 3134.33 94371.84
00:08:30.481 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:30.481 Verification LBA range: start 0x0 length 0x200
00:08:30.481 Malloc2p2 : 5.13 1572.20 6.14 0.00 0.00 80414.66 3789.69 93460.03
00:08:30.481 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:30.481 Verification LBA range: start 0x200 length 0x200
00:08:30.481 Malloc2p2 : 5.13 1585.60 6.19 0.00 0.00 79829.31 3846.68 89812.81
00:08:30.481 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:30.481 Verification LBA range: start 0x0 length 0x200
00:08:30.481 Malloc2p3 : 5.13 1571.90 6.14 0.00 0.00 80306.16 3447.76 89812.81
00:08:30.481 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:30.481 Verification LBA range: start 0x200 length 0x200
00:08:30.481 Malloc2p3 : 5.13 1585.28 6.19 0.00 0.00 79725.82 3433.52 85709.69
00:08:30.481 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:30.481 Verification LBA range: start 0x0 length 0x200
00:08:30.481 Malloc2p4 : 5.13 1571.59 6.14 0.00 0.00 80211.43 3191.32 85709.69
00:08:30.481 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:30.481 Verification LBA range: start 0x200 length 0x200
Malloc2p4 : 5.14 1584.98 6.19 0.00 0.00 79618.01 3148.58 82062.47
00:08:30.481 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:30.481 Verification LBA range: start 0x0 length 0x200
00:08:30.481 Malloc2p5 : 5.14 1583.56 6.19 0.00 0.00 79705.87 3105.84 82974.27
00:08:30.481 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:30.481 Verification LBA range: start 0x200 length 0x200
00:08:30.481 Malloc2p5 : 5.14 1584.69 6.19 0.00 0.00 79538.22 3091.59 78871.15
00:08:30.481 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:30.481 Verification LBA range: start 0x0 length 0x200
00:08:30.481 Malloc2p6 : 5.14 1582.84 6.18 0.00 0.00 79623.52 3134.33 79782.96
00:08:30.481 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:30.481 Verification LBA range: start 0x200 length 0x200
00:08:30.481 Malloc2p6 : 5.14 1584.38 6.19 0.00 0.00 79454.33 3105.84 76135.74
00:08:30.481 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:30.481 Verification LBA range: start 0x0 length 0x200
00:08:30.481 Malloc2p7 : 5.15 1582.10 6.18 0.00 0.00 79540.71 3177.07 76591.64
00:08:30.481 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:30.481 Verification LBA range: start 0x200 length 0x200
00:08:30.481 Malloc2p7 : 5.14 1584.08 6.19 0.00 0.00 79366.43 3248.31 72944.42
00:08:30.481 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:30.481 Verification LBA range: start 0x0 length 0x1000
00:08:30.481 TestPT : 5.15 1568.16 6.13 0.00 0.00 80137.22 9346.00 76591.64
00:08:30.481 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:30.481 Verification LBA range: start 0x1000 length 0x1000
00:08:30.481 TestPT : 5.14 1553.60 6.07 0.00 0.00 80812.39 8149.26 125829.12
00:08:30.481 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:30.481 Verification LBA range: start 0x0 length 0x2000
00:08:30.481 raid0 : 5.15 1581.45 6.18 0.00 0.00 79284.63 3205.57 65194.07
00:08:30.482 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:30.482 Verification LBA range: start 0x2000 length 0x2000
00:08:30.482 raid0 : 5.14 1583.48 6.19 0.00 0.00 79129.56 3305.29 59267.34
00:08:30.482 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:30.482 Verification LBA range: start 0x0 length 0x2000
00:08:30.482 concat0 : 5.15 1580.64 6.17 0.00 0.00 79203.05 3319.54 62002.75
00:08:30.482 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:30.482 Verification LBA range: start 0x2000 length 0x2000
00:08:30.482 concat0 : 5.14 1582.77 6.18 0.00 0.00 79041.09 3319.54 58811.44
00:08:30.482 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:30.482 Verification LBA range: start 0x0 length 0x1000
00:08:30.482 raid1 : 5.15 1579.78 6.17 0.00 0.00 79116.50 3561.74 60179.14
00:08:30.482 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:30.482 Verification LBA range: start 0x1000 length 0x1000
00:08:30.482 raid1 : 5.15 1599.67 6.25 0.00 0.00 78279.35 1018.66 59267.34
00:08:30.482 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:08:30.482 Verification LBA range: start 0x0 length 0x4e2
00:08:30.482 AIO0 : 5.15 1579.55 6.17 0.00 0.00 79017.07 2379.24 60635.05
00:08:30.482 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:08:30.482 Verification LBA range: start 0x4e2 length 0x4e2
00:08:30.482 AIO0 : 5.15 1598.88 6.25 0.00 0.00 78198.95 2450.48 59723.24
00:08:30.482 ===================================================================================================================
00:08:30.482 Total : 52007.03 203.15 0.00 0.00 77599.98 1018.66 170507.58
00:08:30.741
00:08:30.741 real 0m6.262s
00:08:30.741 user 0m11.715s
00:08:30.741 sys 0m0.340s
00:08:30.741 12:00:37 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:30.741 12:00:37 -- common/autotest_common.sh@10 -- # set +x
00:08:30.741 ************************************
00:08:30.741 END TEST bdev_verify
00:08:30.741 ************************************
00:08:30.741 12:00:37 -- bdev/blockdev.sh@776 -- # run_test bdev_verify_big_io /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:30.741 12:00:37 -- common/autotest_common.sh@1077 -- # '[' 16 -le 1 ']'
00:08:30.741 12:00:37 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:08:30.741 12:00:37 -- common/autotest_common.sh@10 -- # set +x
00:08:30.741 ************************************
00:08:30.741 START TEST bdev_verify_big_io
00:08:30.741 ************************************
00:08:30.741 12:00:37 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:08:30.741 [2024-07-25 12:00:38.043278] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:08:30.741 [2024-07-25 12:00:38.043326] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1207535 ]
00:08:30.999 [2024-07-25 12:00:38.128414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:08:30.999 [2024-07-25 12:00:38.211280] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:30.999 [2024-07-25 12:00:38.211270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:31.256 [2024-07-25 12:00:38.351180] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:08:31.256 [2024-07-25 12:00:38.351225] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:08:31.256 [2024-07-25 12:00:38.351235] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:08:31.256 [2024-07-25 12:00:38.359195] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:08:31.256 [2024-07-25 12:00:38.359218] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:08:31.256 [2024-07-25 12:00:38.367211] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:08:31.256 [2024-07-25 12:00:38.367229] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:08:31.256 [2024-07-25 12:00:38.435670] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:08:31.256 [2024-07-25 12:00:38.435712] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:31.256 [2024-07-25 12:00:38.435725] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x103fc10
00:08:31.256 [2024-07-25 12:00:38.435733] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:31.256 [2024-07-25 12:00:38.436859] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:31.256 [2024-07-25 12:00:38.436884] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:08:31.515 [2024-07-25 12:00:38.597133] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32
[2024-07-25 12:00:38.597894] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32
[2024-07-25 12:00:38.599119] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32
[2024-07-25 12:00:38.599873] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32
[2024-07-25 12:00:38.601091] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32
[2024-07-25 12:00:38.601838] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32
[2024-07-25 12:00:38.603072] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32
[2024-07-25 12:00:38.604289] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32
[2024-07-25 12:00:38.605001] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32
[2024-07-25 12:00:38.606173] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32
[2024-07-25 12:00:38.606897] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32
[2024-07-25 12:00:38.608068] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32
[2024-07-25 12:00:38.608757] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32
[2024-07-25 12:00:38.609963] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32
[2024-07-25 12:00:38.610696] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32
[2024-07-25 12:00:38.611887] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32
[2024-07-25 12:00:38.632792] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78
[2024-07-25 12:00:38.634566] bdevperf.c:1807:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78
Running I/O for 5 seconds...
00:08:38.075
00:08:38.075 Latency(us)
00:08:38.075 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:38.075 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:38.075 Verification LBA range: start 0x0 length 0x100
00:08:38.075 Malloc0 : 5.49 411.39 25.71 0.00 0.00 301676.94 18350.08 824271.92
00:08:38.075 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:38.075 Verification LBA range: start 0x100 length 0x100
00:08:38.075 Malloc0 : 5.49 411.39 25.71 0.00 0.00 302327.64 15614.66 977455.19
00:08:38.075 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:38.075 Verification LBA range: start 0x0 length 0x80
00:08:38.075 Malloc1p0 : 5.49 294.18 18.39 0.00 0.00 416584.18 43994.60 970160.75
00:08:38.075 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:38.075 Verification LBA range: start 0x80 length 0x80
00:08:38.075 Malloc1p0 : 5.57 237.55 14.85 0.00 0.00 515364.79 43766.65 864391.35
00:08:38.075 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:38.075 Verification LBA range: start 0x0 length 0x80
00:08:38.075 Malloc1p1 : 5.78 137.70 8.61 0.00 0.00 874523.44 36016.31 1838199.32
00:08:38.075 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:38.075 Verification LBA range: start 0x80 length 0x80
00:08:38.075 Malloc1p1 : 5.67 146.22 9.14 0.00 0.00 831255.64 36016.31 1779843.78
00:08:38.075 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:08:38.075 Verification LBA range: start 0x0 length 0x20
00:08:38.075 Malloc2p0 : 5.57 76.99 4.81 0.00 0.00 391364.09 5955.23 645558.09
00:08:38.075 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:08:38.075 Verification LBA range: start 0x20 length 0x20
00:08:38.075 Malloc2p0 : 5.57 81.02 5.06 0.00 0.00 372064.64 5926.73 561672.01
00:08:38.075 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:08:38.075 Verification LBA range: start 0x0 length 0x20
00:08:38.075 Malloc2p1 : 5.57 76.95 4.81 0.00 0.00 390108.24 5955.23 630969.21
00:08:38.075 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:08:38.075 Verification LBA range: start 0x20 length 0x20
00:08:38.075 Malloc2p1 : 5.57 81.00 5.06 0.00 0.00 370653.31 5869.75 550730.35
00:08:38.075 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:08:38.075 Verification LBA range: start 0x0 length 0x20
00:08:38.075 Malloc2p2 : 5.58 76.93 4.81 0.00 0.00 388695.63 5955.23 620027.55
00:08:38.075 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:08:38.075 Verification LBA range: start 0x20 length 0x20
00:08:38.075 Malloc2p2 : 5.57 80.99 5.06 0.00 0.00 369387.06 5869.75 539788.69
00:08:38.075 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:08:38.075 Verification LBA range: start 0x0 length 0x20
00:08:38.075 Malloc2p3 : 5.58 76.92 4.81 0.00 0.00 387274.19 5955.23 609085.89
00:08:38.075 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:08:38.075 Verification LBA range: start 0x20 length 0x20
00:08:38.075 Malloc2p3 : 5.57 80.98 5.06 0.00 0.00 368044.60 6012.22 528847.03
00:08:38.075 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:08:38.075 Verification LBA range: start 0x0 length 0x20
00:08:38.075 Malloc2p4 : 5.58 76.91 4.81 0.00 0.00 385952.71 6126.19 594497.00
00:08:38.075 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:08:38.075 Verification LBA range: start 0x20 length 0x20
00:08:38.075 Malloc2p4 : 5.57 80.97 5.06 0.00 0.00 366723.77 6154.69 517905.36
00:08:38.075 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:08:38.075 Verification LBA range: start 0x0 length 0x20
00:08:38.075 Malloc2p5 : 5.58 76.90 4.81 0.00 0.00 384477.07 6097.70 583555.34
00:08:38.075 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:08:38.075 Verification LBA range: start 0x20 length 0x20
00:08:38.075 Malloc2p5 : 5.57 80.96 5.06 0.00 0.00 365356.95 6069.20 506963.70
00:08:38.075 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:08:38.075 Verification LBA range: start 0x0 length 0x20
00:08:38.075 Malloc2p6 : 5.58 76.89 4.81 0.00 0.00 383129.82 5983.72 572613.68
00:08:38.075 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:08:38.075 Verification LBA range: start 0x20 length 0x20
00:08:38.075 Malloc2p6 : 5.57 80.94 5.06 0.00 0.00 364053.46 6012.22 492374.82
00:08:38.075 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:08:38.075 Verification LBA range: start 0x0 length 0x20
00:08:38.075 Malloc2p7 : 5.62 80.23 5.01 0.00 0.00 367938.11 6154.69 561672.01
00:08:38.075 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:08:38.075 Verification LBA range: start 0x20 length 0x20
00:08:38.075 Malloc2p7 : 5.57 80.92 5.06 0.00 0.00 362664.68 6183.18 481433.15
00:08:38.075 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:38.075 Verification LBA range: start 0x0 length 0x100
00:08:38.075 TestPT : 5.82 142.55 8.91 0.00 0.00 811235.25 41259.19 1809021.55
00:08:38.075 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:38.075 Verification LBA range: start 0x100 length 0x100
00:08:38.075 TestPT : 5.76 138.25 8.64 0.00 0.00 837481.22 51516.99 1794432.67
00:08:38.075 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:38.075 Verification LBA range: start 0x0 length 0x200
00:08:38.075 raid0 : 5.79 143.24 8.95 0.00 0.00 796620.38 34420.65 1823610.43
00:08:38.075 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:38.075 Verification LBA range: start 0x200 length 0x200
00:08:38.075 raid0 : 5.77 148.74 9.30 0.00 0.00 771983.79 39435.58 1757960.46
00:08:38.075 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:38.075 Verification LBA range: start 0x0 length 0x200
00:08:38.075 concat0 : 5.79 153.58 9.60 0.00 0.00 737593.50 32369.09 1838199.32
00:08:38.075 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:38.075 Verification LBA range: start 0x200 length 0x200
00:08:38.075 concat0 : 5.77 154.09 9.63 0.00 0.00 735422.96 28949.82 1750666.02
00:08:38.075 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:08:38.075 Verification LBA range: start 0x0 length 0x100
00:08:38.075 raid1 : 5.78 186.90 11.68 0.00 0.00 599801.59 10257.81 1852788.20
00:08:38.075 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:08:38.075 Verification LBA range: start 0x100 length 0x100
00:08:38.075 raid1 : 5.77 174.97 10.94 0.00 0.00 641804.01 18692.01 1757960.46
00:08:38.075 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536)
00:08:38.075 Verification LBA range: start 0x0 length 0x4e
00:08:38.075 AIO0 : 5.79 159.33 9.96 0.00 0.00 422494.73 1310.72 1064988.49
00:08:38.075 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536)
00:08:38.075 Verification LBA range: start 0x4e length 0x4e
00:08:38.075 AIO0 : 5.77 173.13 10.82 0.00 0.00 391466.84 2165.54 1013927.40
00:08:38.075 ===================================================================================================================
00:08:38.075 Total : 4479.72 279.98 0.00 0.00 507362.56 1310.72 1852788.20
00:08:38.075
00:08:38.075 real 0m6.926s
00:08:38.075 user 0m13.023s
00:08:38.075 sys 0m0.371s
00:08:38.075 12:00:44 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:38.075 12:00:44 -- common/autotest_common.sh@10 -- # set +x
00:08:38.075 ************************************
00:08:38.075 END TEST bdev_verify_big_io
00:08:38.075 ************************************
12:00:44 -- bdev/blockdev.sh@777 -- # run_test bdev_write_zeroes /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
12:00:44 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']'
12:00:44 -- common/autotest_common.sh@1083 -- # xtrace_disable
12:00:44 -- common/autotest_common.sh@10 -- # set +x
************************************
START TEST bdev_write_zeroes
************************************
12:00:44 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
[2024-07-25 12:00:45.014703] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
[2024-07-25 12:00:45.014750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1208447 ]
[2024-07-25 12:00:45.097396] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
[2024-07-25 12:00:45.180524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
[2024-07-25 12:00:45.330784] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
[2024-07-25 12:00:45.330838] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
[2024-07-25 12:00:45.330847] vbdev_passthru.c: 731:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
[2024-07-25 12:00:45.338796] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:08:38.075 [2024-07-25 12:00:45.338816] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:08:38.075 [2024-07-25 12:00:45.346807] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:08:38.075 [2024-07-25 12:00:45.346825] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:08:38.334 [2024-07-25 12:00:45.419747] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:08:38.334 [2024-07-25 12:00:45.419791] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:08:38.334 [2024-07-25 12:00:45.419804] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13d6ac0
00:08:38.334 [2024-07-25 12:00:45.419812] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:08:38.334 [2024-07-25 12:00:45.420808] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:08:38.334 [2024-07-25 12:00:45.420832] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:08:38.334 Running I/O for 1 seconds...
00:08:39.714
00:08:39.714 Latency(us)
00:08:39.714 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:39.714 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:39.714 Malloc0 : 1.02 7926.57 30.96 0.00 0.00 16144.53 454.12 27582.11
00:08:39.714 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:39.714 Malloc1p0 : 1.03 7922.02 30.95 0.00 0.00 16131.83 594.81 27012.23
00:08:39.714 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:39.714 Malloc1p1 : 1.04 7914.77 30.92 0.00 0.00 16126.33 598.37 26442.35
00:08:39.714 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:39.714 Malloc2p0 : 1.04 7907.69 30.89 0.00 0.00 16116.06 594.81 25872.47
00:08:39.714 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:39.714 Malloc2p1 : 1.04 7900.69 30.86 0.00 0.00 16111.49 594.81 25302.59
00:08:39.714 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:39.714 Malloc2p2 : 1.04 7893.72 30.83 0.00 0.00 16103.57 598.37 24732.72
00:08:39.714 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:39.714 Malloc2p3 : 1.04 7886.71 30.81 0.00 0.00 16093.65 594.81 24162.84
00:08:39.714 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:39.714 Malloc2p4 : 1.04 7879.76 30.78 0.00 0.00 16082.66 591.25 23592.96
00:08:39.714 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:39.714 Malloc2p5 : 1.04 7872.81 30.75 0.00 0.00 16070.99 655.36 22909.11
00:08:39.714 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:39.714 Malloc2p6 : 1.04 7865.79 30.73 0.00 0.00 16063.71 623.30 22225.25
00:08:39.714 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:39.714 Malloc2p7 : 1.04 7858.86 30.70 0.00 0.00 16051.12 598.37 21655.37
00:08:39.714 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:39.714 TestPT : 1.04 7851.86 30.67 0.00 0.00 16040.77 655.36 20971.52
00:08:39.714 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:39.714 raid0 : 1.04 7843.54 30.64 0.00 0.00 16027.36 990.16 19945.74
00:08:39.714 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:39.714 concat0 : 1.05 7835.74 30.61 0.00 0.00 16006.46 997.29 18919.96
00:08:39.714 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:39.714 raid1 : 1.05 7826.08 30.57 0.00 0.00 15983.26 1666.89 17438.27
00:08:39.714 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:08:39.714 AIO0 : 1.05 7820.05 30.55 0.00 0.00 15947.12 701.66 17324.30
00:08:39.714 ===================================================================================================================
00:08:39.714 Total : 126006.66 492.21 0.00 0.00 16068.73 454.12 27582.11
00:08:39.973
00:08:39.973 real 0m2.081s
00:08:39.973 user 0m1.707s
00:08:39.973 sys 0m0.307s
00:08:39.973 12:00:47 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:39.973 12:00:47 -- common/autotest_common.sh@10 -- # set +x
00:08:39.973 ************************************
00:08:39.973 END TEST bdev_write_zeroes
00:08:39.973 ************************************
00:08:39.973 12:00:47 -- bdev/blockdev.sh@780 -- # run_test bdev_json_nonenclosed /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:39.973 12:00:47 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']'
00:08:39.973 12:00:47 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:08:39.973 12:00:47 -- common/autotest_common.sh@10 -- # set +x
00:08:39.973 ************************************
00:08:39.973 START TEST bdev_json_nonenclosed
00:08:39.973 ************************************
00:08:39.973 12:00:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:39.973 [2024-07-25 12:00:47.141713] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:08:39.973 [2024-07-25 12:00:47.141770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1208814 ]
00:08:39.973 [2024-07-25 12:00:47.228107] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:40.232 [2024-07-25 12:00:47.315387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:40.232 [2024-07-25 12:00:47.315487] json_config.c: 595:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:08:40.232 [2024-07-25 12:00:47.315505] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:40.232
00:08:40.232 real 0m0.332s
00:08:40.232 user 0m0.207s
00:08:40.232 sys 0m0.123s
00:08:40.232 12:00:47 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:40.232 12:00:47 -- common/autotest_common.sh@10 -- # set +x
00:08:40.232 ************************************
00:08:40.232 END TEST bdev_json_nonenclosed
00:08:40.232 ************************************
00:08:40.232 12:00:47 -- bdev/blockdev.sh@783 -- # run_test bdev_json_nonarray /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:40.232 12:00:47 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']'
00:08:40.232 12:00:47 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:08:40.232 12:00:47 -- common/autotest_common.sh@10 -- # set +x
00:08:40.232 ************************************
00:08:40.232 START TEST bdev_json_nonarray
00:08:40.232 ************************************
00:08:40.232 12:00:47 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:08:40.232 [2024-07-25 12:00:47.518160] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:08:40.232 [2024-07-25 12:00:47.518208] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1208843 ]
00:08:40.491 [2024-07-25 12:00:47.603129] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:40.491 [2024-07-25 12:00:47.684314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:08:40.491 [2024-07-25 12:00:47.684412] json_config.c: 601:spdk_subsystem_init_from_json_config: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:08:40.491 [2024-07-25 12:00:47.684428] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:08:40.491
00:08:40.491 real 0m0.319s
00:08:40.491 user 0m0.204s
00:08:40.491 sys 0m0.113s
00:08:40.491 12:00:47 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:40.491 12:00:47 -- common/autotest_common.sh@10 -- # set +x
00:08:40.491 ************************************
00:08:40.491 END TEST bdev_json_nonarray
00:08:40.491 ************************************
00:08:40.750 12:00:47 -- bdev/blockdev.sh@785 -- # [[ bdev == bdev ]]
00:08:40.750 12:00:47 -- bdev/blockdev.sh@786 -- # run_test bdev_qos qos_test_suite ''
00:08:40.750 12:00:47 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:08:40.750 12:00:47 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:08:40.750 12:00:47 -- common/autotest_common.sh@10 -- # set +x
00:08:40.750 ************************************
00:08:40.750 START TEST bdev_qos
00:08:40.750 ************************************
00:08:40.750 12:00:47 -- common/autotest_common.sh@1104 -- # qos_test_suite ''
00:08:40.750 12:00:47 -- bdev/blockdev.sh@444 -- # QOS_PID=1208866
00:08:40.750 12:00:47 -- bdev/blockdev.sh@445 -- # echo 'Process qos testing pid: 1208866'
00:08:40.750 Process qos testing pid: 1208866
00:08:40.750 12:00:47 -- bdev/blockdev.sh@446 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT
00:08:40.750 12:00:47 -- bdev/blockdev.sh@447 -- # waitforlisten 1208866
00:08:40.750 12:00:47 -- common/autotest_common.sh@819 -- # '[' -z 1208866 ']'
00:08:40.750 12:00:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:40.750 12:00:47 -- common/autotest_common.sh@824 -- # local max_retries=100
00:08:40.750 12:00:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:40.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:40.750 12:00:47 -- common/autotest_common.sh@828 -- # xtrace_disable
00:08:40.750 12:00:47 -- common/autotest_common.sh@10 -- # set +x
00:08:40.750 12:00:47 -- bdev/blockdev.sh@443 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 ''
00:08:40.750 [2024-07-25 12:00:47.875888] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:08:40.750 [2024-07-25 12:00:47.875941] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1208866 ]
00:08:40.750 [2024-07-25 12:00:47.964123] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:40.750 [2024-07-25 12:00:48.054861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:08:41.715 12:00:48 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:08:41.715 12:00:48 -- common/autotest_common.sh@852 -- # return 0
00:08:41.715 12:00:48 -- bdev/blockdev.sh@449 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512
00:08:41.715 12:00:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:08:41.715 12:00:48 -- common/autotest_common.sh@10 -- # set +x
00:08:41.715 Malloc_0
00:08:41.715 12:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:08:41.715 12:00:48 -- bdev/blockdev.sh@450 -- # waitforbdev Malloc_0
00:08:41.715 12:00:48 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_0
00:08:41.715 12:00:48 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:08:41.715 12:00:48 -- common/autotest_common.sh@889 -- # local i
00:08:41.715 12:00:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:08:41.715 12:00:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:08:41.715 12:00:48 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine
00:08:41.715 12:00:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:08:41.715 12:00:48 -- common/autotest_common.sh@10 -- # set +x
00:08:41.715 12:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:08:41.715 12:00:48 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000
00:08:41.715 12:00:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:08:41.715 12:00:48 -- common/autotest_common.sh@10 -- # set +x
00:08:41.715 [
00:08:41.715 {
00:08:41.715 "name": "Malloc_0",
00:08:41.715 "aliases": [
00:08:41.715 "bd37b6e9-8ffd-458d-bc79-dd14d3ecfb0f"
00:08:41.715 ],
00:08:41.715 "product_name": "Malloc disk",
00:08:41.715 "block_size": 512,
00:08:41.715 "num_blocks": 262144,
00:08:41.715 "uuid": "bd37b6e9-8ffd-458d-bc79-dd14d3ecfb0f",
00:08:41.715 "assigned_rate_limits": {
00:08:41.715 "rw_ios_per_sec": 0,
00:08:41.715 "rw_mbytes_per_sec": 0,
00:08:41.715 "r_mbytes_per_sec": 0,
00:08:41.715 "w_mbytes_per_sec": 0
00:08:41.715 },
00:08:41.715 "claimed": false,
00:08:41.715 "zoned": false,
00:08:41.715 "supported_io_types": {
00:08:41.715 "read": true,
00:08:41.715 "write": true,
00:08:41.715 "unmap": true,
00:08:41.715 "write_zeroes": true,
00:08:41.715 "flush": true,
00:08:41.715 "reset": true,
00:08:41.715 "compare": false,
00:08:41.715 "compare_and_write": false,
00:08:41.715 "abort": true,
00:08:41.715 "nvme_admin": false,
00:08:41.715 "nvme_io": false
00:08:41.715 },
00:08:41.715 "memory_domains": [
00:08:41.715 {
00:08:41.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:08:41.715 "dma_device_type": 2
00:08:41.715 }
00:08:41.715 ],
00:08:41.715 "driver_specific": {}
00:08:41.715 }
00:08:41.715 ]
00:08:41.715 12:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:08:41.715 12:00:48 -- common/autotest_common.sh@895 -- # return 0
00:08:41.715 12:00:48 -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_null_create Null_1 128 512
00:08:41.715 12:00:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:08:41.715 12:00:48 -- common/autotest_common.sh@10 -- # set +x
00:08:41.715 Null_1
00:08:41.715 12:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:08:41.715 12:00:48 -- bdev/blockdev.sh@452 -- # waitforbdev Null_1
00:08:41.715 12:00:48 -- common/autotest_common.sh@887 -- # local bdev_name=Null_1
00:08:41.715 12:00:48 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:08:41.715 12:00:48 -- common/autotest_common.sh@889 -- # local i
00:08:41.715 12:00:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:08:41.715 12:00:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:08:41.715 12:00:48 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine
00:08:41.715 12:00:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:08:41.715 12:00:48 -- common/autotest_common.sh@10 -- # set +x
00:08:41.715 12:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:08:41.715 12:00:48 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000
00:08:41.716 12:00:48 -- common/autotest_common.sh@551 -- # xtrace_disable
00:08:41.716 12:00:48 -- common/autotest_common.sh@10 -- # set +x
00:08:41.716 [
00:08:41.716 {
00:08:41.716 "name": "Null_1",
00:08:41.716 "aliases": [
00:08:41.716 "9f84053d-1c76-4a5d-b803-879286b297f5"
00:08:41.716 ],
00:08:41.716 "product_name": "Null disk",
00:08:41.716 "block_size": 512,
00:08:41.716 "num_blocks": 262144,
00:08:41.716 "uuid": "9f84053d-1c76-4a5d-b803-879286b297f5",
00:08:41.716 "assigned_rate_limits": {
00:08:41.716 "rw_ios_per_sec": 0,
00:08:41.716 "rw_mbytes_per_sec": 0,
00:08:41.716 "r_mbytes_per_sec": 0,
00:08:41.716 "w_mbytes_per_sec": 0
00:08:41.716 },
00:08:41.716 "claimed": false,
00:08:41.716 "zoned": false,
00:08:41.716 "supported_io_types": {
00:08:41.716 "read": true,
00:08:41.716 "write": true,
00:08:41.716 "unmap": false,
00:08:41.716 "write_zeroes": true,
00:08:41.716 "flush": false,
00:08:41.716 "reset": true,
00:08:41.716 "compare": false,
00:08:41.716 "compare_and_write": false,
00:08:41.716 "abort": true,
00:08:41.716 "nvme_admin": false,
00:08:41.716 "nvme_io": false
00:08:41.716 },
00:08:41.716 "driver_specific": {}
00:08:41.716 }
00:08:41.716 ]
00:08:41.716 12:00:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:08:41.716 12:00:48 -- common/autotest_common.sh@895 -- # return 0
00:08:41.716 12:00:48 -- bdev/blockdev.sh@455 -- # qos_function_test
00:08:41.716 12:00:48 -- bdev/blockdev.sh@408 -- # local qos_lower_iops_limit=1000
00:08:41.716 12:00:48 -- bdev/blockdev.sh@409 -- # local qos_lower_bw_limit=2
00:08:41.716 12:00:48 -- bdev/blockdev.sh@410 -- # local io_result=0
00:08:41.716 12:00:48 -- bdev/blockdev.sh@411 -- # local iops_limit=0
00:08:41.716 12:00:48 -- bdev/blockdev.sh@412 -- # local bw_limit=0
00:08:41.716 12:00:48 -- bdev/blockdev.sh@454 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:08:41.716 12:00:48 -- bdev/blockdev.sh@414 -- # get_io_result IOPS Malloc_0
00:08:41.716 12:00:48 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS
00:08:41.716 12:00:48 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0
00:08:41.716 12:00:48 -- bdev/blockdev.sh@375 -- # local iostat_result
00:08:41.716 12:00:48 -- bdev/blockdev.sh@376 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5
00:08:41.716 12:00:48 -- bdev/blockdev.sh@376 -- # grep Malloc_0
00:08:41.716 12:00:48 -- bdev/blockdev.sh@376 -- # tail -1
00:08:41.716 Running I/O for 60 seconds...
00:08:46.996 12:00:53 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 104237.84 416951.35 0.00 0.00 419840.00 0.00 0.00 '
00:08:46.996 12:00:53 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']'
00:08:46.996 12:00:53 -- bdev/blockdev.sh@378 -- # awk '{print $2}'
00:08:46.996 12:00:53 -- bdev/blockdev.sh@378 -- # iostat_result=104237.84
00:08:46.996 12:00:53 -- bdev/blockdev.sh@383 -- # echo 104237
00:08:46.996 12:00:53 -- bdev/blockdev.sh@414 -- # io_result=104237
00:08:46.996 12:00:53 -- bdev/blockdev.sh@416 -- # iops_limit=26000
00:08:46.996 12:00:53 -- bdev/blockdev.sh@417 -- # '[' 26000 -gt 1000 ']'
00:08:46.996 12:00:53 -- bdev/blockdev.sh@420 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 26000 Malloc_0
00:08:46.996 12:00:53 -- common/autotest_common.sh@551 -- # xtrace_disable
00:08:46.996 12:00:53 -- common/autotest_common.sh@10 -- # set +x
00:08:46.996 12:00:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:08:46.996 12:00:53 -- bdev/blockdev.sh@421 -- # run_test bdev_qos_iops run_qos_test 26000 IOPS Malloc_0
00:08:46.996 12:00:53 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']'
00:08:46.996 12:00:53 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:08:46.996 12:00:53 -- common/autotest_common.sh@10 -- # set +x
00:08:46.996 ************************************
00:08:46.996 START TEST bdev_qos_iops
00:08:46.996 ************************************
00:08:46.996 12:00:53 -- common/autotest_common.sh@1104 -- # run_qos_test 26000 IOPS Malloc_0
00:08:46.996 12:00:53 -- bdev/blockdev.sh@387 -- # local qos_limit=26000
00:08:46.996 12:00:53 -- bdev/blockdev.sh@388 -- # local qos_result=0
00:08:46.996 12:00:53 -- bdev/blockdev.sh@390 -- # get_io_result IOPS Malloc_0
00:08:46.996 12:00:53 -- bdev/blockdev.sh@373 -- # local limit_type=IOPS
00:08:46.996 12:00:53 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0
00:08:46.996 12:00:53 -- bdev/blockdev.sh@375 -- # local iostat_result
00:08:46.996 12:00:53 -- bdev/blockdev.sh@376 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5
00:08:46.996 12:00:53 -- bdev/blockdev.sh@376 -- # grep Malloc_0
00:08:46.996 12:00:53 -- bdev/blockdev.sh@376 -- # tail -1
00:08:52.272 12:00:59 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 25997.47 103989.87 0.00 0.00 104728.00 0.00 0.00 '
00:08:52.272 12:00:59 -- bdev/blockdev.sh@377 -- # '[' IOPS = IOPS ']'
00:08:52.272 12:00:59 -- bdev/blockdev.sh@378 -- # awk '{print $2}'
00:08:52.272 12:00:59 -- bdev/blockdev.sh@378 -- # iostat_result=25997.47
00:08:52.272 12:00:59 -- bdev/blockdev.sh@383 -- # echo 25997
00:08:52.272 12:00:59 -- bdev/blockdev.sh@390 -- # qos_result=25997
00:08:52.272 12:00:59 -- bdev/blockdev.sh@391 -- # '[' IOPS = BANDWIDTH ']'
00:08:52.272 12:00:59 -- bdev/blockdev.sh@394 -- # lower_limit=23400
00:08:52.272 12:00:59 -- bdev/blockdev.sh@395 -- # upper_limit=28600
00:08:52.272 12:00:59 -- bdev/blockdev.sh@398 -- # '[' 25997 -lt 23400 ']'
00:08:52.272 12:00:59 -- bdev/blockdev.sh@398 -- # '[' 25997 -gt 28600 ']'
00:08:52.272
00:08:52.272 real 0m5.174s
00:08:52.272 user 0m0.086s
00:08:52.272 sys 0m0.043s
00:08:52.272 12:00:59 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:08:52.272 12:00:59 -- common/autotest_common.sh@10 -- # set +x
00:08:52.272 ************************************
00:08:52.272 END TEST bdev_qos_iops
00:08:52.272 ************************************
00:08:52.272 12:00:59 -- bdev/blockdev.sh@425 -- # get_io_result BANDWIDTH Null_1
00:08:52.272 12:00:59 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH
00:08:52.272 12:00:59 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1
00:08:52.272 12:00:59 -- bdev/blockdev.sh@375 -- # local iostat_result
00:08:52.272 12:00:59 -- bdev/blockdev.sh@376 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5
00:08:52.272 12:00:59 -- bdev/blockdev.sh@376 -- # grep Null_1
00:08:52.272 12:00:59 -- bdev/blockdev.sh@376 -- # tail -1
00:08:57.537 12:01:04 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 29737.36 118949.43 0.00 0.00 119808.00 0.00 0.00 '
00:08:57.537 12:01:04 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']'
00:08:57.537 12:01:04 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:08:57.537 12:01:04 -- bdev/blockdev.sh@380 -- # awk '{print $6}'
00:08:57.537 12:01:04 -- bdev/blockdev.sh@380 -- # iostat_result=119808.00
00:08:57.537 12:01:04 -- bdev/blockdev.sh@383 -- # echo 119808
00:08:57.537 12:01:04 -- bdev/blockdev.sh@425 -- # bw_limit=119808
00:08:57.537 12:01:04 -- bdev/blockdev.sh@426 -- # bw_limit=11
00:08:57.537 12:01:04 -- bdev/blockdev.sh@427 -- # '[' 11 -lt 2 ']'
00:08:57.537 12:01:04 -- bdev/blockdev.sh@430 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 11 Null_1
00:08:57.537 12:01:04 -- common/autotest_common.sh@551 -- # xtrace_disable
00:08:57.537 12:01:04 -- common/autotest_common.sh@10 -- # set +x
00:08:57.537 12:01:04 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:08:57.537 12:01:04 -- bdev/blockdev.sh@431 -- # run_test bdev_qos_bw run_qos_test 11 BANDWIDTH Null_1
00:08:57.537 12:01:04 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']'
00:08:57.537 12:01:04 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:08:57.537 12:01:04 -- common/autotest_common.sh@10 -- # set +x
00:08:57.537 ************************************
00:08:57.537 START TEST bdev_qos_bw
00:08:57.537 ************************************
00:08:57.537 12:01:04 -- common/autotest_common.sh@1104 -- # run_qos_test 11 BANDWIDTH Null_1
00:08:57.537 12:01:04 -- bdev/blockdev.sh@387 -- # local qos_limit=11
00:08:57.537 12:01:04 -- bdev/blockdev.sh@388 -- # local qos_result=0
00:08:57.537 12:01:04 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Null_1
00:08:57.537 12:01:04 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH
00:08:57.537 12:01:04 -- bdev/blockdev.sh@374 -- # local qos_dev=Null_1
00:08:57.537 12:01:04 -- bdev/blockdev.sh@375 -- # local iostat_result
12:01:04 -- bdev/blockdev.sh@376 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5
12:01:04 -- bdev/blockdev.sh@376 -- # grep Null_1
12:01:04 -- bdev/blockdev.sh@376 -- # tail -1
00:09:02.805 12:01:09 -- bdev/blockdev.sh@376 -- # iostat_result='Null_1 2817.70 11270.81 0.00 0.00 11432.00 0.00 0.00 '
00:09:02.805 12:01:09 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']'
00:09:02.805 12:01:09 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:09:02.805 12:01:09 -- bdev/blockdev.sh@380 -- # awk '{print $6}'
00:09:02.805 12:01:09 -- bdev/blockdev.sh@380 -- # iostat_result=11432.00
00:09:02.805 12:01:09 -- bdev/blockdev.sh@383 -- # echo 11432
00:09:02.805 12:01:09 -- bdev/blockdev.sh@390 -- # qos_result=11432
00:09:02.805 12:01:09 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:09:02.805 12:01:09 -- bdev/blockdev.sh@392 -- # qos_limit=11264
00:09:02.805 12:01:09 -- bdev/blockdev.sh@394 -- # lower_limit=10137
00:09:02.805 12:01:09 -- bdev/blockdev.sh@395 -- # upper_limit=12390
00:09:02.805 12:01:09 -- bdev/blockdev.sh@398 -- # '[' 11432 -lt 10137 ']'
00:09:02.805 12:01:09 -- bdev/blockdev.sh@398 -- # '[' 11432 -gt 12390 ']'
00:09:02.805
00:09:02.805 real 0m5.204s
00:09:02.805 user 0m0.082s
00:09:02.805 sys 0m0.048s
00:09:02.805 12:01:09 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:02.805 12:01:09 -- common/autotest_common.sh@10 -- # set +x
00:09:02.805 ************************************
00:09:02.805 END TEST bdev_qos_bw
00:09:02.805 ************************************
00:09:02.805 12:01:09 -- bdev/blockdev.sh@434 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0
00:09:02.805 12:01:09 -- common/autotest_common.sh@551 -- # xtrace_disable
00:09:02.805 12:01:09 -- common/autotest_common.sh@10 -- # set +x
00:09:02.805 12:01:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:09:02.805 12:01:09 -- bdev/blockdev.sh@435 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0
00:09:02.805 12:01:09 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']'
00:09:02.805 12:01:09 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:09:02.805 12:01:09 -- common/autotest_common.sh@10 -- # set +x
00:09:02.805 ************************************
00:09:02.805 START TEST bdev_qos_ro_bw
00:09:02.805 ************************************
00:09:02.805 12:01:09 -- common/autotest_common.sh@1104 -- # run_qos_test 2 BANDWIDTH Malloc_0
00:09:02.805 12:01:09 -- bdev/blockdev.sh@387 -- # local qos_limit=2
00:09:02.805 12:01:09 -- bdev/blockdev.sh@388 -- # local qos_result=0
00:09:02.805 12:01:09 -- bdev/blockdev.sh@390 -- # get_io_result BANDWIDTH Malloc_0
00:09:02.805 12:01:09 -- bdev/blockdev.sh@373 -- # local limit_type=BANDWIDTH
00:09:02.805 12:01:09 -- bdev/blockdev.sh@374 -- # local qos_dev=Malloc_0
00:09:02.805 12:01:09 -- bdev/blockdev.sh@375 -- # local iostat_result
00:09:02.805 12:01:09 -- bdev/blockdev.sh@376 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/iostat.py -d -i 1 -t 5
00:09:02.805 12:01:09 -- bdev/blockdev.sh@376 -- # tail -1
00:09:02.805 12:01:09 -- bdev/blockdev.sh@376 -- # grep Malloc_0
00:09:08.070 12:01:14 -- bdev/blockdev.sh@376 -- # iostat_result='Malloc_0 512.17 2048.66 0.00 0.00 2060.00 0.00 0.00 '
00:09:08.070 12:01:14 -- bdev/blockdev.sh@377 -- # '[' BANDWIDTH = IOPS ']'
00:09:08.070 12:01:14 -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:09:08.070 12:01:14 -- bdev/blockdev.sh@380 -- # awk '{print $6}'
00:09:08.070 12:01:14 -- bdev/blockdev.sh@380 -- # iostat_result=2060.00
00:09:08.070 12:01:14 -- bdev/blockdev.sh@383 -- # echo 2060
00:09:08.070 12:01:14 -- bdev/blockdev.sh@390 -- # qos_result=2060
00:09:08.070 12:01:14 -- bdev/blockdev.sh@391 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:09:08.070 12:01:14 -- bdev/blockdev.sh@392 -- # qos_limit=2048
00:09:08.070 12:01:14 -- bdev/blockdev.sh@394 -- # lower_limit=1843
00:09:08.070 12:01:14 -- bdev/blockdev.sh@395 -- # upper_limit=2252
00:09:08.070 12:01:14 -- bdev/blockdev.sh@398 -- # '[' 2060 -lt 1843 ']'
00:09:08.070 12:01:14 -- bdev/blockdev.sh@398 -- # '[' 2060 -gt 2252 ']'
00:09:08.070
00:09:08.070 real 0m5.159s
00:09:08.070 user 0m0.087s
00:09:08.070 sys 0m0.041s
00:09:08.070 12:01:14 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:08.070 12:01:14 -- common/autotest_common.sh@10 -- # set +x
00:09:08.070 ************************************
00:09:08.070 END TEST bdev_qos_ro_bw
00:09:08.070 ************************************
00:09:08.070 12:01:14 -- bdev/blockdev.sh@457 -- # rpc_cmd bdev_malloc_delete Malloc_0
00:09:08.070 12:01:14 -- common/autotest_common.sh@551 -- # xtrace_disable
00:09:08.070 12:01:14 -- common/autotest_common.sh@10 -- # set +x
00:09:08.070 12:01:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:09:08.070 12:01:15 -- bdev/blockdev.sh@458 -- # rpc_cmd bdev_null_delete Null_1
00:09:08.070 12:01:15 -- common/autotest_common.sh@551 -- # xtrace_disable
00:09:08.070 12:01:15 -- common/autotest_common.sh@10 -- # set +x
00:09:08.328
00:09:08.328 Latency(us)
00:09:08.328 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:08.328 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:09:08.328 Malloc_0 : 26.45 35498.68 138.67 0.00 0.00 7143.00 1324.97 503316.48
00:09:08.328 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:09:08.328 Null_1 : 26.57 32889.48 128.47 0.00 0.00 7765.55 470.15 110328.43
00:09:08.328 ===================================================================================================================
00:09:08.328 Total : 68388.16 267.14 0.00 0.00 7443.06 470.15 503316.48
00:09:08.328 0
00:09:08.328 12:01:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:09:08.328 12:01:15 -- bdev/blockdev.sh@459 -- # killprocess 1208866
00:09:08.328 12:01:15 -- common/autotest_common.sh@926 -- # '[' -z 1208866 ']'
00:09:08.328 12:01:15 -- common/autotest_common.sh@930 -- # kill -0 1208866
00:09:08.328 12:01:15 -- common/autotest_common.sh@931 -- # uname
00:09:08.328 12:01:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:09:08.328 12:01:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1208866
00:09:08.328 12:01:15 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:09:08.328 12:01:15 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:09:08.328 12:01:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1208866'
00:09:08.328 killing process with pid 1208866
00:09:08.328 12:01:15 -- common/autotest_common.sh@945 -- # kill 1208866
00:09:08.328 Received shutdown signal, test time was about 26.622998 seconds
00:09:08.328
00:09:08.328 Latency(us)
00:09:08.328 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:08.328 ===================================================================================================================
00:09:08.328 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:09:08.328 12:01:15 -- common/autotest_common.sh@950 -- # wait 1208866
00:09:08.587 12:01:15 -- bdev/blockdev.sh@460 -- # trap - SIGINT SIGTERM EXIT
00:09:08.587
00:09:08.587 real 0m27.917s
00:09:08.587 user 0m28.375s
00:09:08.587 sys 0m0.710s
00:09:08.587 12:01:15 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:08.587 12:01:15 -- common/autotest_common.sh@10 -- # set +x
00:09:08.587 ************************************
00:09:08.587 END TEST bdev_qos
00:09:08.587 ************************************
00:09:08.587 12:01:15 -- bdev/blockdev.sh@787 -- # run_test bdev_qd_sampling qd_sampling_test_suite ''
00:09:08.587 12:01:15 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']'
00:09:08.587 12:01:15 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:09:08.587 12:01:15 -- common/autotest_common.sh@10 -- # set +x
00:09:08.587 ************************************
00:09:08.587 START TEST bdev_qd_sampling
00:09:08.587 ************************************
00:09:08.587 12:01:15 -- common/autotest_common.sh@1104 -- # qd_sampling_test_suite ''
00:09:08.587 12:01:15 -- bdev/blockdev.sh@536 -- # QD_DEV=Malloc_QD
00:09:08.587 12:01:15 -- bdev/blockdev.sh@539 -- # QD_PID=1212724
00:09:08.587 12:01:15 -- bdev/blockdev.sh@540 -- # echo 'Process bdev QD sampling period testing pid: 1212724'
00:09:08.587 Process bdev QD sampling period testing pid: 1212724
00:09:08.587 12:01:15 -- bdev/blockdev.sh@541 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT
00:09:08.587 12:01:15 -- bdev/blockdev.sh@542 -- # waitforlisten 1212724
00:09:08.587 12:01:15 -- common/autotest_common.sh@819 -- # '[' -z 1212724 ']'
00:09:08.587 12:01:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:08.587 12:01:15 -- common/autotest_common.sh@824 -- # local max_retries=100
00:09:08.587 12:01:15 -- bdev/blockdev.sh@538 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C ''
00:09:08.587 12:01:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:08.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:08.587 12:01:15 -- common/autotest_common.sh@828 -- # xtrace_disable
00:09:08.587 12:01:15 -- common/autotest_common.sh@10 -- # set +x
00:09:08.587 [2024-07-25 12:01:15.841078] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:09:08.587 [2024-07-25 12:01:15.841131] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1212724 ]
00:09:08.587 [2024-07-25 12:01:15.928792] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2
00:09:08.846 [2024-07-25 12:01:16.018375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:09:08.846 [2024-07-25 12:01:16.018379] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:09.410 12:01:16 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:09:09.410 12:01:16 -- common/autotest_common.sh@852 -- # return 0
00:09:09.410 12:01:16 -- bdev/blockdev.sh@544 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512
00:09:09.410 12:01:16 -- common/autotest_common.sh@551 -- # xtrace_disable
00:09:09.410 12:01:16 -- common/autotest_common.sh@10 -- # set +x
00:09:09.410 Malloc_QD
00:09:09.410 12:01:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:09:09.410 12:01:16 -- bdev/blockdev.sh@545 -- # waitforbdev Malloc_QD
00:09:09.410 12:01:16 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_QD
00:09:09.410 12:01:16 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:09:09.410 12:01:16 -- common/autotest_common.sh@889 -- # local i
00:09:09.410 12:01:16 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:09:09.410 12:01:16 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:09:09.410 12:01:16 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine
00:09:09.410 12:01:16 -- common/autotest_common.sh@551 -- # xtrace_disable
00:09:09.410 12:01:16 -- common/autotest_common.sh@10 -- # set +x
00:09:09.410 12:01:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:09:09.410 12:01:16 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000
00:09:09.410 12:01:16 -- common/autotest_common.sh@551 -- # xtrace_disable
00:09:09.410 12:01:16 -- common/autotest_common.sh@10 -- # set +x
00:09:09.410 [
00:09:09.410 {
00:09:09.410 "name": "Malloc_QD",
00:09:09.410 "aliases": [
00:09:09.410 "26abf0c5-4cec-48d0-b804-f99bcb26f7b8"
00:09:09.410 ],
00:09:09.410 "product_name": "Malloc disk",
00:09:09.410 "block_size": 512,
00:09:09.410 "num_blocks": 262144,
00:09:09.410 "uuid": "26abf0c5-4cec-48d0-b804-f99bcb26f7b8",
00:09:09.410 "assigned_rate_limits": {
00:09:09.410 "rw_ios_per_sec": 0,
00:09:09.410 "rw_mbytes_per_sec": 0,
00:09:09.410 "r_mbytes_per_sec": 0,
00:09:09.410 "w_mbytes_per_sec": 0
00:09:09.410 },
00:09:09.410 "claimed": false,
00:09:09.410 "zoned": false,
00:09:09.410 "supported_io_types": {
00:09:09.410 "read": true,
00:09:09.410 "write": true,
00:09:09.410 "unmap": true,
00:09:09.410 "write_zeroes": true,
00:09:09.410 "flush": true,
00:09:09.410 "reset": true,
00:09:09.410 "compare": false,
00:09:09.410 "compare_and_write": false,
00:09:09.410 "abort": true,
00:09:09.410 "nvme_admin": false,
00:09:09.410 "nvme_io": false
00:09:09.410 },
00:09:09.410 "memory_domains": [
00:09:09.410 {
00:09:09.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:09.410 "dma_device_type": 2
00:09:09.410 }
00:09:09.410 ],
00:09:09.410 "driver_specific": {}
00:09:09.410 }
00:09:09.410 ]
00:09:09.410 12:01:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:09:09.410 12:01:16 -- common/autotest_common.sh@895 -- # return 0
00:09:09.410 12:01:16 -- bdev/blockdev.sh@548 -- # sleep 2
00:09:09.410 12:01:16 -- bdev/blockdev.sh@547 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:09:09.668 Running I/O for 5 seconds...
00:09:11.606 12:01:18 -- bdev/blockdev.sh@549 -- # qd_sampling_function_test Malloc_QD
00:09:11.606 12:01:18 -- bdev/blockdev.sh@517 -- # local bdev_name=Malloc_QD
00:09:11.606 12:01:18 -- bdev/blockdev.sh@518 -- # local sampling_period=10
00:09:11.606 12:01:18 -- bdev/blockdev.sh@519 -- # local iostats
00:09:11.606 12:01:18 -- bdev/blockdev.sh@521 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10
00:09:11.606 12:01:18 -- common/autotest_common.sh@551 -- # xtrace_disable
00:09:11.606 12:01:18 -- common/autotest_common.sh@10 -- # set +x
00:09:11.606 12:01:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:09:11.606 12:01:18 -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_get_iostat -b Malloc_QD
00:09:11.606 12:01:18 -- common/autotest_common.sh@551 -- # xtrace_disable
00:09:11.606 12:01:18 -- common/autotest_common.sh@10 -- # set +x
00:09:11.606 12:01:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:09:11.606 12:01:18 -- bdev/blockdev.sh@523 -- # iostats='{
00:09:11.606 "tick_rate": 2300000000,
00:09:11.606 "ticks": 13112383680774760,
00:09:11.606 "bdevs": [
00:09:11.606 {
00:09:11.606 "name": "Malloc_QD",
00:09:11.606 "bytes_read": 1038135808,
00:09:11.607 "num_read_ops": 253444,
00:09:11.607 "bytes_written": 0,
00:09:11.607 "num_write_ops": 0,
00:09:11.607 "bytes_unmapped": 0,
00:09:11.607 "num_unmap_ops": 0,
00:09:11.607 "bytes_copied": 0,
00:09:11.607 "num_copy_ops": 0,
00:09:11.607 "read_latency_ticks": 2284655538654,
00:09:11.607 "max_read_latency_ticks": 11032928,
00:09:11.607 "min_read_latency_ticks": 189462,
00:09:11.607 "write_latency_ticks": 0,
00:09:11.607 "max_write_latency_ticks": 0,
00:09:11.607 "min_write_latency_ticks": 0,
00:09:11.607 "unmap_latency_ticks": 0,
00:09:11.607 "max_unmap_latency_ticks": 0,
00:09:11.607 "min_unmap_latency_ticks": 0,
00:09:11.607 "copy_latency_ticks": 0,
00:09:11.607 "max_copy_latency_ticks": 0,
00:09:11.607 "min_copy_latency_ticks": 0,
00:09:11.607 "io_error": {},
00:09:11.607 "queue_depth_polling_period": 10,
00:09:11.607 "queue_depth": 512,
00:09:11.607 "io_time": 40,
00:09:11.607 "weighted_io_time": 20480
00:09:11.607 }
00:09:11.607 ]
00:09:11.607 }'
00:09:11.607 12:01:18 -- bdev/blockdev.sh@525 -- # jq -r '.bdevs[0].queue_depth_polling_period'
00:09:11.607 12:01:18 -- bdev/blockdev.sh@525 -- # qd_sampling_period=10
00:09:11.607 12:01:18 -- bdev/blockdev.sh@527 -- # '[' 10 == null ']'
00:09:11.607 12:01:18 -- bdev/blockdev.sh@527 -- # '[' 10 -ne 10 ']'
00:09:11.607 12:01:18 -- bdev/blockdev.sh@551 -- # rpc_cmd bdev_malloc_delete Malloc_QD
00:09:11.607 12:01:18 -- common/autotest_common.sh@551 -- # xtrace_disable
00:09:11.607 12:01:18 -- common/autotest_common.sh@10 -- # set +x
00:09:11.607
00:09:11.607 Latency(us)
00:09:11.607 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:11.607 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096)
00:09:11.607 Malloc_QD : 2.01 64832.25 253.25 0.00 0.00 3940.47 1018.66 4274.09
00:09:11.607 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:09:11.607 Malloc_QD : 2.01 65679.02 256.56 0.00 0.00 3889.81 655.36 4815.47
00:09:11.607 ===================================================================================================================
00:09:11.607 Total : 130511.26 509.81 0.00 0.00 3914.96 655.36 4815.47
00:09:11.607 0
00:09:11.607 12:01:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]]
00:09:11.607 12:01:18 -- bdev/blockdev.sh@552 -- # killprocess 1212724
00:09:11.607 12:01:18 -- common/autotest_common.sh@926 -- # '[' -z 1212724 ']'
00:09:11.607 12:01:18 -- common/autotest_common.sh@930 -- # kill -0 1212724
00:09:11.607 12:01:18 -- common/autotest_common.sh@931 -- # uname
00:09:11.607 12:01:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:09:11.607 12:01:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1212724
00:09:11.607 12:01:18 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:09:11.607 12:01:18 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:11.607 12:01:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1212724' 00:09:11.607 killing process with pid 1212724 00:09:11.607 12:01:18 -- common/autotest_common.sh@945 -- # kill 1212724 00:09:11.607 Received shutdown signal, test time was about 2.088391 seconds 00:09:11.607 00:09:11.607 Latency(us) 00:09:11.607 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:11.607 =================================================================================================================== 00:09:11.607 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:11.607 12:01:18 -- common/autotest_common.sh@950 -- # wait 1212724 00:09:11.868 12:01:19 -- bdev/blockdev.sh@553 -- # trap - SIGINT SIGTERM EXIT 00:09:11.868 00:09:11.868 real 0m3.300s 00:09:11.868 user 0m6.402s 00:09:11.868 sys 0m0.360s 00:09:11.868 12:01:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:11.868 12:01:19 -- common/autotest_common.sh@10 -- # set +x 00:09:11.868 ************************************ 00:09:11.868 END TEST bdev_qd_sampling 00:09:11.868 ************************************ 00:09:11.868 12:01:19 -- bdev/blockdev.sh@788 -- # run_test bdev_error error_test_suite '' 00:09:11.868 12:01:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:11.868 12:01:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:11.868 12:01:19 -- common/autotest_common.sh@10 -- # set +x 00:09:11.868 ************************************ 00:09:11.868 START TEST bdev_error 00:09:11.868 ************************************ 00:09:11.868 12:01:19 -- common/autotest_common.sh@1104 -- # error_test_suite '' 00:09:11.868 12:01:19 -- bdev/blockdev.sh@464 -- # DEV_1=Dev_1 00:09:11.868 12:01:19 -- bdev/blockdev.sh@465 -- # DEV_2=Dev_2 00:09:11.868 12:01:19 -- bdev/blockdev.sh@466 -- # ERR_DEV=EE_Dev_1 00:09:11.868 12:01:19 -- bdev/blockdev.sh@470 -- # ERR_PID=1213201 
00:09:11.868 12:01:19 -- bdev/blockdev.sh@471 -- # echo 'Process error testing pid: 1213201' 00:09:11.868 Process error testing pid: 1213201 00:09:11.868 12:01:19 -- bdev/blockdev.sh@472 -- # waitforlisten 1213201 00:09:11.868 12:01:19 -- common/autotest_common.sh@819 -- # '[' -z 1213201 ']' 00:09:11.868 12:01:19 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.868 12:01:19 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:11.868 12:01:19 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:11.868 12:01:19 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:11.868 12:01:19 -- bdev/blockdev.sh@469 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:09:11.868 12:01:19 -- common/autotest_common.sh@10 -- # set +x 00:09:12.126 [2024-07-25 12:01:19.187390] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:09:12.126 [2024-07-25 12:01:19.187447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1213201 ] 00:09:12.126 [2024-07-25 12:01:19.277028] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.126 [2024-07-25 12:01:19.356106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:12.691 12:01:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:12.691 12:01:19 -- common/autotest_common.sh@852 -- # return 0 00:09:12.691 12:01:19 -- bdev/blockdev.sh@474 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:09:12.691 12:01:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:12.691 12:01:19 -- common/autotest_common.sh@10 -- # set +x 00:09:12.949 Dev_1 00:09:12.949 12:01:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:12.949 12:01:20 -- bdev/blockdev.sh@475 -- # waitforbdev Dev_1 00:09:12.949 12:01:20 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:09:12.949 12:01:20 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:12.949 12:01:20 -- common/autotest_common.sh@889 -- # local i 00:09:12.949 12:01:20 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:12.949 12:01:20 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:12.949 12:01:20 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:09:12.949 12:01:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:12.949 12:01:20 -- common/autotest_common.sh@10 -- # set +x 00:09:12.949 12:01:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:12.949 12:01:20 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:09:12.949 12:01:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:12.949 12:01:20 -- common/autotest_common.sh@10 -- # set +x 00:09:12.949 [ 00:09:12.949 { 00:09:12.949 "name": 
"Dev_1", 00:09:12.949 "aliases": [ 00:09:12.949 "b21fa31a-6dd1-4e3b-8e2c-41ffe0ce99c5" 00:09:12.949 ], 00:09:12.949 "product_name": "Malloc disk", 00:09:12.949 "block_size": 512, 00:09:12.949 "num_blocks": 262144, 00:09:12.949 "uuid": "b21fa31a-6dd1-4e3b-8e2c-41ffe0ce99c5", 00:09:12.949 "assigned_rate_limits": { 00:09:12.949 "rw_ios_per_sec": 0, 00:09:12.949 "rw_mbytes_per_sec": 0, 00:09:12.949 "r_mbytes_per_sec": 0, 00:09:12.949 "w_mbytes_per_sec": 0 00:09:12.949 }, 00:09:12.949 "claimed": false, 00:09:12.949 "zoned": false, 00:09:12.949 "supported_io_types": { 00:09:12.949 "read": true, 00:09:12.949 "write": true, 00:09:12.949 "unmap": true, 00:09:12.949 "write_zeroes": true, 00:09:12.949 "flush": true, 00:09:12.949 "reset": true, 00:09:12.949 "compare": false, 00:09:12.949 "compare_and_write": false, 00:09:12.949 "abort": true, 00:09:12.949 "nvme_admin": false, 00:09:12.949 "nvme_io": false 00:09:12.949 }, 00:09:12.949 "memory_domains": [ 00:09:12.949 { 00:09:12.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.949 "dma_device_type": 2 00:09:12.949 } 00:09:12.949 ], 00:09:12.949 "driver_specific": {} 00:09:12.949 } 00:09:12.949 ] 00:09:12.949 12:01:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:12.949 12:01:20 -- common/autotest_common.sh@895 -- # return 0 00:09:12.949 12:01:20 -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_error_create Dev_1 00:09:12.949 12:01:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:12.949 12:01:20 -- common/autotest_common.sh@10 -- # set +x 00:09:12.949 true 00:09:12.949 12:01:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:12.949 12:01:20 -- bdev/blockdev.sh@477 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:09:12.949 12:01:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:12.949 12:01:20 -- common/autotest_common.sh@10 -- # set +x 00:09:12.949 Dev_2 00:09:12.949 12:01:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:12.949 12:01:20 -- bdev/blockdev.sh@478 -- # 
waitforbdev Dev_2 00:09:12.949 12:01:20 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:09:12.949 12:01:20 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:12.949 12:01:20 -- common/autotest_common.sh@889 -- # local i 00:09:12.949 12:01:20 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:12.949 12:01:20 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:12.949 12:01:20 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:09:12.949 12:01:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:12.949 12:01:20 -- common/autotest_common.sh@10 -- # set +x 00:09:12.949 12:01:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:12.949 12:01:20 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:09:12.949 12:01:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:12.949 12:01:20 -- common/autotest_common.sh@10 -- # set +x 00:09:12.949 [ 00:09:12.949 { 00:09:12.949 "name": "Dev_2", 00:09:12.949 "aliases": [ 00:09:12.949 "866f8898-eae6-4b75-bb3c-49b0f96f4236" 00:09:12.949 ], 00:09:12.949 "product_name": "Malloc disk", 00:09:12.949 "block_size": 512, 00:09:12.949 "num_blocks": 262144, 00:09:12.949 "uuid": "866f8898-eae6-4b75-bb3c-49b0f96f4236", 00:09:12.949 "assigned_rate_limits": { 00:09:12.949 "rw_ios_per_sec": 0, 00:09:12.949 "rw_mbytes_per_sec": 0, 00:09:12.949 "r_mbytes_per_sec": 0, 00:09:12.949 "w_mbytes_per_sec": 0 00:09:12.949 }, 00:09:12.949 "claimed": false, 00:09:12.949 "zoned": false, 00:09:12.949 "supported_io_types": { 00:09:12.949 "read": true, 00:09:12.949 "write": true, 00:09:12.949 "unmap": true, 00:09:12.949 "write_zeroes": true, 00:09:12.949 "flush": true, 00:09:12.949 "reset": true, 00:09:12.949 "compare": false, 00:09:12.949 "compare_and_write": false, 00:09:12.949 "abort": true, 00:09:12.949 "nvme_admin": false, 00:09:12.949 "nvme_io": false 00:09:12.949 }, 00:09:12.949 "memory_domains": [ 00:09:12.949 { 00:09:12.949 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:12.949 "dma_device_type": 2 00:09:12.949 } 00:09:12.949 ], 00:09:12.949 "driver_specific": {} 00:09:12.949 } 00:09:12.949 ] 00:09:12.949 12:01:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:12.949 12:01:20 -- common/autotest_common.sh@895 -- # return 0 00:09:12.949 12:01:20 -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:09:12.949 12:01:20 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:12.949 12:01:20 -- common/autotest_common.sh@10 -- # set +x 00:09:12.949 12:01:20 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:12.949 12:01:20 -- bdev/blockdev.sh@482 -- # sleep 1 00:09:12.949 12:01:20 -- bdev/blockdev.sh@481 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:09:12.949 Running I/O for 5 seconds... 00:09:13.882 12:01:21 -- bdev/blockdev.sh@485 -- # kill -0 1213201 00:09:13.882 12:01:21 -- bdev/blockdev.sh@486 -- # echo 'Process is existed as continue on error is set. Pid: 1213201' 00:09:13.882 Process is existed as continue on error is set. 
Pid: 1213201 00:09:13.882 12:01:21 -- bdev/blockdev.sh@493 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:09:13.882 12:01:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:13.882 12:01:21 -- common/autotest_common.sh@10 -- # set +x 00:09:13.882 12:01:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:13.882 12:01:21 -- bdev/blockdev.sh@494 -- # rpc_cmd bdev_malloc_delete Dev_1 00:09:13.882 12:01:21 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:13.882 12:01:21 -- common/autotest_common.sh@10 -- # set +x 00:09:13.882 12:01:21 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:13.882 12:01:21 -- bdev/blockdev.sh@495 -- # sleep 5 00:09:14.139 Timeout while waiting for response: 00:09:14.139 00:09:14.139 00:09:18.321 00:09:18.321 Latency(us) 00:09:18.321 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:18.321 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:09:18.321 EE_Dev_1 : 0.93 61988.27 242.14 5.40 0.00 256.14 87.26 441.66 00:09:18.321 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:09:18.321 Dev_2 : 5.00 131867.88 515.11 0.00 0.00 119.38 40.74 21997.30 00:09:18.321 =================================================================================================================== 00:09:18.321 Total : 193856.15 757.25 5.40 0.00 130.33 40.74 21997.30 00:09:18.885 12:01:26 -- bdev/blockdev.sh@497 -- # killprocess 1213201 00:09:18.885 12:01:26 -- common/autotest_common.sh@926 -- # '[' -z 1213201 ']' 00:09:18.885 12:01:26 -- common/autotest_common.sh@930 -- # kill -0 1213201 00:09:18.885 12:01:26 -- common/autotest_common.sh@931 -- # uname 00:09:18.885 12:01:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:18.885 12:01:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1213201 00:09:19.144 12:01:26 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:09:19.144 12:01:26 -- common/autotest_common.sh@936 -- 
# '[' reactor_1 = sudo ']' 00:09:19.144 12:01:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1213201' 00:09:19.144 killing process with pid 1213201 00:09:19.144 12:01:26 -- common/autotest_common.sh@945 -- # kill 1213201 00:09:19.144 Received shutdown signal, test time was about 5.000000 seconds 00:09:19.144 00:09:19.144 Latency(us) 00:09:19.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:19.144 =================================================================================================================== 00:09:19.144 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:19.144 12:01:26 -- common/autotest_common.sh@950 -- # wait 1213201 00:09:19.402 12:01:26 -- bdev/blockdev.sh@501 -- # ERR_PID=1214202 00:09:19.402 12:01:26 -- bdev/blockdev.sh@502 -- # echo 'Process error testing pid: 1214202' 00:09:19.402 Process error testing pid: 1214202 00:09:19.402 12:01:26 -- bdev/blockdev.sh@500 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:09:19.402 12:01:26 -- bdev/blockdev.sh@503 -- # waitforlisten 1214202 00:09:19.402 12:01:26 -- common/autotest_common.sh@819 -- # '[' -z 1214202 ']' 00:09:19.402 12:01:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:19.402 12:01:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:19.402 12:01:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:19.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:19.402 12:01:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:19.402 12:01:26 -- common/autotest_common.sh@10 -- # set +x 00:09:19.402 [2024-07-25 12:01:26.548349] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:09:19.402 [2024-07-25 12:01:26.548412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1214202 ] 00:09:19.402 [2024-07-25 12:01:26.636959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.660 [2024-07-25 12:01:26.718092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.226 12:01:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:20.226 12:01:27 -- common/autotest_common.sh@852 -- # return 0 00:09:20.226 12:01:27 -- bdev/blockdev.sh@505 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:09:20.226 12:01:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:20.226 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:09:20.226 Dev_1 00:09:20.226 12:01:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:20.226 12:01:27 -- bdev/blockdev.sh@506 -- # waitforbdev Dev_1 00:09:20.226 12:01:27 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_1 00:09:20.226 12:01:27 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:20.226 12:01:27 -- common/autotest_common.sh@889 -- # local i 00:09:20.226 12:01:27 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:20.226 12:01:27 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:20.226 12:01:27 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:09:20.226 12:01:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:20.226 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:09:20.226 12:01:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:20.226 12:01:27 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:09:20.226 12:01:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:20.226 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:09:20.226 [ 00:09:20.226 { 00:09:20.226 "name": 
"Dev_1", 00:09:20.226 "aliases": [ 00:09:20.226 "48e2c542-8363-447c-b6ab-4e637c47b180" 00:09:20.226 ], 00:09:20.226 "product_name": "Malloc disk", 00:09:20.226 "block_size": 512, 00:09:20.226 "num_blocks": 262144, 00:09:20.226 "uuid": "48e2c542-8363-447c-b6ab-4e637c47b180", 00:09:20.226 "assigned_rate_limits": { 00:09:20.226 "rw_ios_per_sec": 0, 00:09:20.226 "rw_mbytes_per_sec": 0, 00:09:20.226 "r_mbytes_per_sec": 0, 00:09:20.226 "w_mbytes_per_sec": 0 00:09:20.226 }, 00:09:20.226 "claimed": false, 00:09:20.226 "zoned": false, 00:09:20.226 "supported_io_types": { 00:09:20.226 "read": true, 00:09:20.226 "write": true, 00:09:20.226 "unmap": true, 00:09:20.226 "write_zeroes": true, 00:09:20.226 "flush": true, 00:09:20.226 "reset": true, 00:09:20.226 "compare": false, 00:09:20.226 "compare_and_write": false, 00:09:20.226 "abort": true, 00:09:20.226 "nvme_admin": false, 00:09:20.226 "nvme_io": false 00:09:20.226 }, 00:09:20.226 "memory_domains": [ 00:09:20.226 { 00:09:20.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.226 "dma_device_type": 2 00:09:20.226 } 00:09:20.226 ], 00:09:20.226 "driver_specific": {} 00:09:20.226 } 00:09:20.226 ] 00:09:20.226 12:01:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:20.226 12:01:27 -- common/autotest_common.sh@895 -- # return 0 00:09:20.226 12:01:27 -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_error_create Dev_1 00:09:20.226 12:01:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:20.226 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:09:20.226 true 00:09:20.226 12:01:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:20.226 12:01:27 -- bdev/blockdev.sh@508 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:09:20.226 12:01:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:20.226 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:09:20.226 Dev_2 00:09:20.226 12:01:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:20.226 12:01:27 -- bdev/blockdev.sh@509 -- # 
waitforbdev Dev_2 00:09:20.226 12:01:27 -- common/autotest_common.sh@887 -- # local bdev_name=Dev_2 00:09:20.226 12:01:27 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:20.226 12:01:27 -- common/autotest_common.sh@889 -- # local i 00:09:20.226 12:01:27 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:20.226 12:01:27 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:20.226 12:01:27 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:09:20.226 12:01:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:20.226 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:09:20.226 12:01:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:20.226 12:01:27 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:09:20.226 12:01:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:20.226 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:09:20.226 [ 00:09:20.226 { 00:09:20.226 "name": "Dev_2", 00:09:20.226 "aliases": [ 00:09:20.226 "1e9b1481-75de-456d-98fd-fda6995e3b1e" 00:09:20.226 ], 00:09:20.226 "product_name": "Malloc disk", 00:09:20.226 "block_size": 512, 00:09:20.226 "num_blocks": 262144, 00:09:20.226 "uuid": "1e9b1481-75de-456d-98fd-fda6995e3b1e", 00:09:20.226 "assigned_rate_limits": { 00:09:20.226 "rw_ios_per_sec": 0, 00:09:20.226 "rw_mbytes_per_sec": 0, 00:09:20.226 "r_mbytes_per_sec": 0, 00:09:20.226 "w_mbytes_per_sec": 0 00:09:20.226 }, 00:09:20.226 "claimed": false, 00:09:20.226 "zoned": false, 00:09:20.226 "supported_io_types": { 00:09:20.226 "read": true, 00:09:20.226 "write": true, 00:09:20.226 "unmap": true, 00:09:20.226 "write_zeroes": true, 00:09:20.226 "flush": true, 00:09:20.226 "reset": true, 00:09:20.226 "compare": false, 00:09:20.226 "compare_and_write": false, 00:09:20.226 "abort": true, 00:09:20.226 "nvme_admin": false, 00:09:20.226 "nvme_io": false 00:09:20.226 }, 00:09:20.226 "memory_domains": [ 00:09:20.226 { 00:09:20.226 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:09:20.226 "dma_device_type": 2 00:09:20.226 } 00:09:20.226 ], 00:09:20.226 "driver_specific": {} 00:09:20.226 } 00:09:20.226 ] 00:09:20.226 12:01:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:20.226 12:01:27 -- common/autotest_common.sh@895 -- # return 0 00:09:20.226 12:01:27 -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:09:20.226 12:01:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:20.226 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:09:20.226 12:01:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:20.226 12:01:27 -- bdev/blockdev.sh@513 -- # NOT wait 1214202 00:09:20.226 12:01:27 -- common/autotest_common.sh@640 -- # local es=0 00:09:20.226 12:01:27 -- bdev/blockdev.sh@512 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:09:20.226 12:01:27 -- common/autotest_common.sh@642 -- # valid_exec_arg wait 1214202 00:09:20.226 12:01:27 -- common/autotest_common.sh@628 -- # local arg=wait 00:09:20.226 12:01:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:20.226 12:01:27 -- common/autotest_common.sh@632 -- # type -t wait 00:09:20.226 12:01:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:20.226 12:01:27 -- common/autotest_common.sh@643 -- # wait 1214202 00:09:20.484 Running I/O for 5 seconds... 
00:09:20.484 task offset: 174648 on job bdev=EE_Dev_1 fails 00:09:20.484 00:09:20.484 Latency(us) 00:09:20.484 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:20.484 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:09:20.484 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:09:20.484 EE_Dev_1 : 0.00 46413.50 181.30 10548.52 0.00 230.38 87.26 414.94 00:09:20.484 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:09:20.484 Dev_2 : 0.00 27947.60 109.17 0.00 0.00 426.07 82.37 790.71 00:09:20.484 =================================================================================================================== 00:09:20.484 Total : 74361.10 290.47 10548.52 0.00 336.52 82.37 790.71 00:09:20.484 [2024-07-25 12:01:27.566936] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:20.484 request: 00:09:20.484 { 00:09:20.484 "method": "perform_tests", 00:09:20.484 "req_id": 1 00:09:20.484 } 00:09:20.484 Got JSON-RPC error response 00:09:20.484 response: 00:09:20.484 { 00:09:20.484 "code": -32603, 00:09:20.484 "message": "bdevperf failed with error Operation not permitted" 00:09:20.484 } 00:09:20.742 12:01:27 -- common/autotest_common.sh@643 -- # es=255 00:09:20.742 12:01:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:20.742 12:01:27 -- common/autotest_common.sh@652 -- # es=127 00:09:20.742 12:01:27 -- common/autotest_common.sh@653 -- # case "$es" in 00:09:20.742 12:01:27 -- common/autotest_common.sh@660 -- # es=1 00:09:20.742 12:01:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:20.742 00:09:20.742 real 0m8.721s 00:09:20.742 user 0m8.913s 00:09:20.742 sys 0m0.736s 00:09:20.742 12:01:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:20.742 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:09:20.742 ************************************ 00:09:20.742 END TEST bdev_error 00:09:20.742 ************************************ 
00:09:20.742 12:01:27 -- bdev/blockdev.sh@789 -- # run_test bdev_stat stat_test_suite '' 00:09:20.742 12:01:27 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:20.742 12:01:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:20.742 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:09:20.742 ************************************ 00:09:20.742 START TEST bdev_stat 00:09:20.742 ************************************ 00:09:20.742 12:01:27 -- common/autotest_common.sh@1104 -- # stat_test_suite '' 00:09:20.742 12:01:27 -- bdev/blockdev.sh@590 -- # STAT_DEV=Malloc_STAT 00:09:20.742 12:01:27 -- bdev/blockdev.sh@594 -- # STAT_PID=1214404 00:09:20.742 12:01:27 -- bdev/blockdev.sh@595 -- # echo 'Process Bdev IO statistics testing pid: 1214404' 00:09:20.742 Process Bdev IO statistics testing pid: 1214404 00:09:20.742 12:01:27 -- bdev/blockdev.sh@593 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:09:20.742 12:01:27 -- bdev/blockdev.sh@596 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:09:20.742 12:01:27 -- bdev/blockdev.sh@597 -- # waitforlisten 1214404 00:09:20.742 12:01:27 -- common/autotest_common.sh@819 -- # '[' -z 1214404 ']' 00:09:20.742 12:01:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:20.742 12:01:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:20.742 12:01:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:20.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:20.742 12:01:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:20.742 12:01:27 -- common/autotest_common.sh@10 -- # set +x 00:09:20.742 [2024-07-25 12:01:27.964471] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:09:20.742 [2024-07-25 12:01:27.964521] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1214404 ] 00:09:21.000 [2024-07-25 12:01:28.052541] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:21.000 [2024-07-25 12:01:28.142755] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:21.000 [2024-07-25 12:01:28.142758] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.565 12:01:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:21.565 12:01:28 -- common/autotest_common.sh@852 -- # return 0 00:09:21.565 12:01:28 -- bdev/blockdev.sh@599 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:09:21.565 12:01:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:21.565 12:01:28 -- common/autotest_common.sh@10 -- # set +x 00:09:21.565 Malloc_STAT 00:09:21.565 12:01:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:21.565 12:01:28 -- bdev/blockdev.sh@600 -- # waitforbdev Malloc_STAT 00:09:21.565 12:01:28 -- common/autotest_common.sh@887 -- # local bdev_name=Malloc_STAT 00:09:21.565 12:01:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:21.565 12:01:28 -- common/autotest_common.sh@889 -- # local i 00:09:21.565 12:01:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:21.565 12:01:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:21.565 12:01:28 -- common/autotest_common.sh@892 -- # rpc_cmd bdev_wait_for_examine 00:09:21.565 12:01:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:21.565 12:01:28 -- common/autotest_common.sh@10 -- # set +x 00:09:21.565 12:01:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:21.565 12:01:28 -- common/autotest_common.sh@894 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:09:21.565 12:01:28 -- common/autotest_common.sh@551 
-- # xtrace_disable 00:09:21.565 12:01:28 -- common/autotest_common.sh@10 -- # set +x 00:09:21.565 [ 00:09:21.565 { 00:09:21.565 "name": "Malloc_STAT", 00:09:21.565 "aliases": [ 00:09:21.565 "65dfb86d-60af-4f48-935f-feb2f86c9381" 00:09:21.565 ], 00:09:21.565 "product_name": "Malloc disk", 00:09:21.565 "block_size": 512, 00:09:21.565 "num_blocks": 262144, 00:09:21.565 "uuid": "65dfb86d-60af-4f48-935f-feb2f86c9381", 00:09:21.565 "assigned_rate_limits": { 00:09:21.565 "rw_ios_per_sec": 0, 00:09:21.565 "rw_mbytes_per_sec": 0, 00:09:21.565 "r_mbytes_per_sec": 0, 00:09:21.565 "w_mbytes_per_sec": 0 00:09:21.565 }, 00:09:21.565 "claimed": false, 00:09:21.565 "zoned": false, 00:09:21.565 "supported_io_types": { 00:09:21.566 "read": true, 00:09:21.566 "write": true, 00:09:21.566 "unmap": true, 00:09:21.566 "write_zeroes": true, 00:09:21.566 "flush": true, 00:09:21.566 "reset": true, 00:09:21.566 "compare": false, 00:09:21.566 "compare_and_write": false, 00:09:21.566 "abort": true, 00:09:21.566 "nvme_admin": false, 00:09:21.566 "nvme_io": false 00:09:21.566 }, 00:09:21.566 "memory_domains": [ 00:09:21.566 { 00:09:21.566 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.566 "dma_device_type": 2 00:09:21.566 } 00:09:21.566 ], 00:09:21.566 "driver_specific": {} 00:09:21.566 } 00:09:21.566 ] 00:09:21.566 12:01:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:21.566 12:01:28 -- common/autotest_common.sh@895 -- # return 0 00:09:21.566 12:01:28 -- bdev/blockdev.sh@603 -- # sleep 2 00:09:21.566 12:01:28 -- bdev/blockdev.sh@602 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:21.823 Running I/O for 10 seconds... 
00:09:23.724 12:01:30 -- bdev/blockdev.sh@604 -- # stat_function_test Malloc_STAT 00:09:23.724 12:01:30 -- bdev/blockdev.sh@557 -- # local bdev_name=Malloc_STAT 00:09:23.724 12:01:30 -- bdev/blockdev.sh@558 -- # local iostats 00:09:23.724 12:01:30 -- bdev/blockdev.sh@559 -- # local io_count1 00:09:23.724 12:01:30 -- bdev/blockdev.sh@560 -- # local io_count2 00:09:23.724 12:01:30 -- bdev/blockdev.sh@561 -- # local iostats_per_channel 00:09:23.724 12:01:30 -- bdev/blockdev.sh@562 -- # local io_count_per_channel1 00:09:23.724 12:01:30 -- bdev/blockdev.sh@563 -- # local io_count_per_channel2 00:09:23.724 12:01:30 -- bdev/blockdev.sh@564 -- # local io_count_per_channel_all=0 00:09:23.724 12:01:30 -- bdev/blockdev.sh@566 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:09:23.724 12:01:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:23.724 12:01:30 -- common/autotest_common.sh@10 -- # set +x 00:09:23.724 12:01:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:23.724 12:01:30 -- bdev/blockdev.sh@566 -- # iostats='{ 00:09:23.724 "tick_rate": 2300000000, 00:09:23.724 "ticks": 13112411504741264, 00:09:23.724 "bdevs": [ 00:09:23.724 { 00:09:23.724 "name": "Malloc_STAT", 00:09:23.724 "bytes_read": 1033941504, 00:09:23.724 "num_read_ops": 252420, 00:09:23.724 "bytes_written": 0, 00:09:23.724 "num_write_ops": 0, 00:09:23.724 "bytes_unmapped": 0, 00:09:23.724 "num_unmap_ops": 0, 00:09:23.724 "bytes_copied": 0, 00:09:23.724 "num_copy_ops": 0, 00:09:23.724 "read_latency_ticks": 2265375196608, 00:09:23.724 "max_read_latency_ticks": 11732852, 00:09:23.724 "min_read_latency_ticks": 216330, 00:09:23.724 "write_latency_ticks": 0, 00:09:23.724 "max_write_latency_ticks": 0, 00:09:23.724 "min_write_latency_ticks": 0, 00:09:23.724 "unmap_latency_ticks": 0, 00:09:23.724 "max_unmap_latency_ticks": 0, 00:09:23.724 "min_unmap_latency_ticks": 0, 00:09:23.724 "copy_latency_ticks": 0, 00:09:23.724 "max_copy_latency_ticks": 0, 00:09:23.724 "min_copy_latency_ticks": 0, 
00:09:23.724 "io_error": {} 00:09:23.724 } 00:09:23.724 ] 00:09:23.724 }' 00:09:23.724 12:01:30 -- bdev/blockdev.sh@567 -- # jq -r '.bdevs[0].num_read_ops' 00:09:23.724 12:01:30 -- bdev/blockdev.sh@567 -- # io_count1=252420 00:09:23.724 12:01:30 -- bdev/blockdev.sh@569 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:09:23.724 12:01:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:23.724 12:01:30 -- common/autotest_common.sh@10 -- # set +x 00:09:23.724 12:01:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:23.724 12:01:30 -- bdev/blockdev.sh@569 -- # iostats_per_channel='{ 00:09:23.724 "tick_rate": 2300000000, 00:09:23.724 "ticks": 13112411668515626, 00:09:23.724 "name": "Malloc_STAT", 00:09:23.724 "channels": [ 00:09:23.724 { 00:09:23.724 "thread_id": 2, 00:09:23.724 "bytes_read": 530579456, 00:09:23.724 "num_read_ops": 129536, 00:09:23.724 "bytes_written": 0, 00:09:23.724 "num_write_ops": 0, 00:09:23.724 "bytes_unmapped": 0, 00:09:23.724 "num_unmap_ops": 0, 00:09:23.724 "bytes_copied": 0, 00:09:23.724 "num_copy_ops": 0, 00:09:23.724 "read_latency_ticks": 1173432901820, 00:09:23.724 "max_read_latency_ticks": 11732852, 00:09:23.724 "min_read_latency_ticks": 5897180, 00:09:23.724 "write_latency_ticks": 0, 00:09:23.724 "max_write_latency_ticks": 0, 00:09:23.724 "min_write_latency_ticks": 0, 00:09:23.724 "unmap_latency_ticks": 0, 00:09:23.724 "max_unmap_latency_ticks": 0, 00:09:23.724 "min_unmap_latency_ticks": 0, 00:09:23.724 "copy_latency_ticks": 0, 00:09:23.724 "max_copy_latency_ticks": 0, 00:09:23.724 "min_copy_latency_ticks": 0 00:09:23.724 }, 00:09:23.724 { 00:09:23.724 "thread_id": 3, 00:09:23.724 "bytes_read": 541065216, 00:09:23.724 "num_read_ops": 132096, 00:09:23.724 "bytes_written": 0, 00:09:23.724 "num_write_ops": 0, 00:09:23.724 "bytes_unmapped": 0, 00:09:23.724 "num_unmap_ops": 0, 00:09:23.724 "bytes_copied": 0, 00:09:23.724 "num_copy_ops": 0, 00:09:23.724 "read_latency_ticks": 1175066239832, 00:09:23.724 
"max_read_latency_ticks": 11041216, 00:09:23.724 "min_read_latency_ticks": 5970562, 00:09:23.724 "write_latency_ticks": 0, 00:09:23.724 "max_write_latency_ticks": 0, 00:09:23.724 "min_write_latency_ticks": 0, 00:09:23.724 "unmap_latency_ticks": 0, 00:09:23.724 "max_unmap_latency_ticks": 0, 00:09:23.724 "min_unmap_latency_ticks": 0, 00:09:23.724 "copy_latency_ticks": 0, 00:09:23.724 "max_copy_latency_ticks": 0, 00:09:23.724 "min_copy_latency_ticks": 0 00:09:23.724 } 00:09:23.724 ] 00:09:23.724 }' 00:09:23.724 12:01:30 -- bdev/blockdev.sh@570 -- # jq -r '.channels[0].num_read_ops' 00:09:23.724 12:01:30 -- bdev/blockdev.sh@570 -- # io_count_per_channel1=129536 00:09:23.724 12:01:30 -- bdev/blockdev.sh@571 -- # io_count_per_channel_all=129536 00:09:23.724 12:01:30 -- bdev/blockdev.sh@572 -- # jq -r '.channels[1].num_read_ops' 00:09:23.724 12:01:31 -- bdev/blockdev.sh@572 -- # io_count_per_channel2=132096 00:09:23.724 12:01:31 -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=261632 00:09:23.724 12:01:31 -- bdev/blockdev.sh@575 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:09:23.724 12:01:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:23.724 12:01:31 -- common/autotest_common.sh@10 -- # set +x 00:09:23.724 12:01:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:23.724 12:01:31 -- bdev/blockdev.sh@575 -- # iostats='{ 00:09:23.724 "tick_rate": 2300000000, 00:09:23.724 "ticks": 13112411881266964, 00:09:23.724 "bdevs": [ 00:09:23.724 { 00:09:23.724 "name": "Malloc_STAT", 00:09:23.724 "bytes_read": 1119924736, 00:09:23.724 "num_read_ops": 273412, 00:09:23.724 "bytes_written": 0, 00:09:23.724 "num_write_ops": 0, 00:09:23.724 "bytes_unmapped": 0, 00:09:23.724 "num_unmap_ops": 0, 00:09:23.724 "bytes_copied": 0, 00:09:23.724 "num_copy_ops": 0, 00:09:23.724 "read_latency_ticks": 2455740600688, 00:09:23.724 "max_read_latency_ticks": 11732852, 00:09:23.724 "min_read_latency_ticks": 216330, 00:09:23.725 "write_latency_ticks": 0, 00:09:23.725 
"max_write_latency_ticks": 0, 00:09:23.725 "min_write_latency_ticks": 0, 00:09:23.725 "unmap_latency_ticks": 0, 00:09:23.725 "max_unmap_latency_ticks": 0, 00:09:23.725 "min_unmap_latency_ticks": 0, 00:09:23.725 "copy_latency_ticks": 0, 00:09:23.725 "max_copy_latency_ticks": 0, 00:09:23.725 "min_copy_latency_ticks": 0, 00:09:23.725 "io_error": {} 00:09:23.725 } 00:09:23.725 ] 00:09:23.725 }' 00:09:23.725 12:01:31 -- bdev/blockdev.sh@576 -- # jq -r '.bdevs[0].num_read_ops' 00:09:23.982 12:01:31 -- bdev/blockdev.sh@576 -- # io_count2=273412 00:09:23.982 12:01:31 -- bdev/blockdev.sh@581 -- # '[' 261632 -lt 252420 ']' 00:09:23.982 12:01:31 -- bdev/blockdev.sh@581 -- # '[' 261632 -gt 273412 ']' 00:09:23.982 12:01:31 -- bdev/blockdev.sh@606 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:09:23.982 12:01:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:23.982 12:01:31 -- common/autotest_common.sh@10 -- # set +x 00:09:23.982 00:09:23.982 Latency(us) 00:09:23.982 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.982 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:09:23.982 Malloc_STAT : 2.16 64799.36 253.12 0.00 0.00 3942.63 1040.03 5128.90 00:09:23.982 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:09:23.982 Malloc_STAT : 2.16 66080.04 258.13 0.00 0.00 3866.55 612.62 4815.47 00:09:23.982 =================================================================================================================== 00:09:23.982 Total : 130879.40 511.25 0.00 0.00 3904.21 612.62 5128.90 00:09:23.982 0 00:09:23.982 12:01:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:23.982 12:01:31 -- bdev/blockdev.sh@607 -- # killprocess 1214404 00:09:23.982 12:01:31 -- common/autotest_common.sh@926 -- # '[' -z 1214404 ']' 00:09:23.982 12:01:31 -- common/autotest_common.sh@930 -- # kill -0 1214404 00:09:23.982 12:01:31 -- common/autotest_common.sh@931 -- # uname 00:09:23.982 
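[Editor's note] The two bracket tests above (`'[' 261632 -lt 252420 ']'` / `'[' 261632 -gt 273412 ']'`) encode the stat test's pass condition: the per-channel read counts summed between the two whole-bdev snapshots (129536 + 132096) must fall inside the window spanned by the first snapshot (252420 reads) and the second (273412 reads). A sketch of that invariant, using the counts quoted from this run:

```python
# Values quoted from the bdev_get_iostat output above.
io_count1 = 252420                 # whole-bdev reads, first snapshot
per_channel = [129536, 132096]     # per-channel reads, sampled between snapshots
io_count2 = 273412                 # whole-bdev reads, second snapshot

total = sum(per_channel)
# Mirrors the shell's '-lt'/'-gt' failure checks: the mid-run sample
# cannot be below the first snapshot or above the second.
assert io_count1 <= total <= io_count2, "per-channel sum outside snapshot window"
print(total)  # 261632
```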
12:01:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:23.983 12:01:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1214404 00:09:23.983 12:01:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:23.983 12:01:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:23.983 12:01:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1214404' 00:09:23.983 killing process with pid 1214404 00:09:23.983 12:01:31 -- common/autotest_common.sh@945 -- # kill 1214404 00:09:23.983 Received shutdown signal, test time was about 2.234679 seconds 00:09:23.983 00:09:23.983 Latency(us) 00:09:23.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:23.983 =================================================================================================================== 00:09:23.983 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:23.983 12:01:31 -- common/autotest_common.sh@950 -- # wait 1214404 00:09:24.241 12:01:31 -- bdev/blockdev.sh@608 -- # trap - SIGINT SIGTERM EXIT 00:09:24.241 00:09:24.241 real 0m3.449s 00:09:24.241 user 0m6.811s 00:09:24.241 sys 0m0.422s 00:09:24.241 12:01:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:24.241 12:01:31 -- common/autotest_common.sh@10 -- # set +x 00:09:24.241 ************************************ 00:09:24.241 END TEST bdev_stat 00:09:24.241 ************************************ 00:09:24.241 12:01:31 -- bdev/blockdev.sh@792 -- # [[ bdev == gpt ]] 00:09:24.241 12:01:31 -- bdev/blockdev.sh@796 -- # [[ bdev == crypto_sw ]] 00:09:24.241 12:01:31 -- bdev/blockdev.sh@808 -- # trap - SIGINT SIGTERM EXIT 00:09:24.241 12:01:31 -- bdev/blockdev.sh@809 -- # cleanup 00:09:24.241 12:01:31 -- bdev/blockdev.sh@21 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/aiofile 00:09:24.241 12:01:31 -- bdev/blockdev.sh@22 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev.json 00:09:24.241 12:01:31 
-- bdev/blockdev.sh@24 -- # [[ bdev == rbd ]] 00:09:24.241 12:01:31 -- bdev/blockdev.sh@28 -- # [[ bdev == daos ]] 00:09:24.241 12:01:31 -- bdev/blockdev.sh@32 -- # [[ bdev = \g\p\t ]] 00:09:24.241 12:01:31 -- bdev/blockdev.sh@38 -- # [[ bdev == xnvme ]] 00:09:24.241 00:09:24.241 real 1m45.433s 00:09:24.241 user 6m54.484s 00:09:24.241 sys 0m18.192s 00:09:24.241 12:01:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:24.241 12:01:31 -- common/autotest_common.sh@10 -- # set +x 00:09:24.241 ************************************ 00:09:24.241 END TEST blockdev_general 00:09:24.241 ************************************ 00:09:24.241 12:01:31 -- spdk/autotest.sh@196 -- # run_test bdev_raid /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:09:24.241 12:01:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:24.241 12:01:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:24.241 12:01:31 -- common/autotest_common.sh@10 -- # set +x 00:09:24.241 ************************************ 00:09:24.241 START TEST bdev_raid 00:09:24.241 ************************************ 00:09:24.241 12:01:31 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh 00:09:24.499 * Looking for test storage... 
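[Editor's note] In the bdevperf latency summary a few lines up, the MiB/s column is just the IOPS column scaled by the 4 KiB I/O size shown in the job line. A quick cross-check of the three rows from this run (tolerance covers the table's two-decimal rounding):

```python
io_size = 4096  # bytes, from "IO size: 4096" in the job line above
rows = [
    (64799.36, 253.12),    # Malloc_STAT, core mask 0x1
    (66080.04, 258.13),    # Malloc_STAT, core mask 0x2
    (130879.40, 511.25),   # Total
]
for iops, mib_per_s in rows:
    derived = iops * io_size / (1024 * 1024)
    # Each reported MiB/s figure should match IOPS * 4 KiB within rounding.
    assert abs(derived - mib_per_s) < 0.006
```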
00:09:24.499 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev 00:09:24.499 12:01:31 -- bdev/bdev_raid.sh@12 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbd_common.sh 00:09:24.499 12:01:31 -- bdev/nbd_common.sh@6 -- # set -e 00:09:24.499 12:01:31 -- bdev/bdev_raid.sh@14 -- # rpc_py='/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:09:24.499 12:01:31 -- bdev/bdev_raid.sh@714 -- # trap 'on_error_exit;' ERR 00:09:24.499 12:01:31 -- bdev/bdev_raid.sh@716 -- # uname -s 00:09:24.499 12:01:31 -- bdev/bdev_raid.sh@716 -- # '[' Linux = Linux ']' 00:09:24.499 12:01:31 -- bdev/bdev_raid.sh@716 -- # modprobe -n nbd 00:09:24.499 12:01:31 -- bdev/bdev_raid.sh@717 -- # has_nbd=true 00:09:24.499 12:01:31 -- bdev/bdev_raid.sh@718 -- # modprobe nbd 00:09:24.499 12:01:31 -- bdev/bdev_raid.sh@719 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:09:24.499 12:01:31 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:24.499 12:01:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:24.499 12:01:31 -- common/autotest_common.sh@10 -- # set +x 00:09:24.499 ************************************ 00:09:24.499 START TEST raid_function_test_raid0 00:09:24.499 ************************************ 00:09:24.499 12:01:31 -- common/autotest_common.sh@1104 -- # raid_function_test raid0 00:09:24.499 12:01:31 -- bdev/bdev_raid.sh@81 -- # local raid_level=raid0 00:09:24.499 12:01:31 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:09:24.499 12:01:31 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:09:24.499 12:01:31 -- bdev/bdev_raid.sh@86 -- # raid_pid=1215024 00:09:24.499 12:01:31 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 1215024' 00:09:24.499 Process raid pid: 1215024 00:09:24.499 12:01:31 -- bdev/bdev_raid.sh@85 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:24.499 
12:01:31 -- bdev/bdev_raid.sh@88 -- # waitforlisten 1215024 /var/tmp/spdk-raid.sock 00:09:24.499 12:01:31 -- common/autotest_common.sh@819 -- # '[' -z 1215024 ']' 00:09:24.499 12:01:31 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:24.499 12:01:31 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:24.499 12:01:31 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:24.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:24.499 12:01:31 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:24.499 12:01:31 -- common/autotest_common.sh@10 -- # set +x 00:09:24.499 [2024-07-25 12:01:31.652867] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:24.499 [2024-07-25 12:01:31.652926] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.499 [2024-07-25 12:01:31.741469] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.757 [2024-07-25 12:01:31.821595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.757 [2024-07-25 12:01:31.876529] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:24.757 [2024-07-25 12:01:31.876556] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:25.322 12:01:32 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:25.322 12:01:32 -- common/autotest_common.sh@852 -- # return 0 00:09:25.322 12:01:32 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev raid0 00:09:25.322 12:01:32 -- bdev/bdev_raid.sh@67 -- # local raid_level=raid0 00:09:25.322 12:01:32 -- bdev/bdev_raid.sh@68 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:09:25.322 
12:01:32 -- bdev/bdev_raid.sh@70 -- # cat 00:09:25.322 12:01:32 -- bdev/bdev_raid.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:09:25.322 [2024-07-25 12:01:32.624190] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:25.322 [2024-07-25 12:01:32.625160] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:25.322 [2024-07-25 12:01:32.625212] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x2769e90 00:09:25.322 [2024-07-25 12:01:32.625219] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:25.322 [2024-07-25 12:01:32.625363] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2780b10 00:09:25.322 [2024-07-25 12:01:32.625439] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2769e90 00:09:25.322 [2024-07-25 12:01:32.625446] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x2769e90 00:09:25.322 [2024-07-25 12:01:32.625518] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:25.322 Base_1 00:09:25.322 Base_2 00:09:25.580 12:01:32 -- bdev/bdev_raid.sh@77 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:09:25.580 12:01:32 -- bdev/bdev_raid.sh@91 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:09:25.580 12:01:32 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:09:25.580 12:01:32 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:09:25.580 12:01:32 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:09:25.580 12:01:32 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:09:25.580 12:01:32 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:25.580 12:01:32 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 
00:09:25.580 12:01:32 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:25.580 12:01:32 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:25.580 12:01:32 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:25.580 12:01:32 -- bdev/nbd_common.sh@12 -- # local i 00:09:25.580 12:01:32 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:09:25.580 12:01:32 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:25.580 12:01:32 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:09:25.838 [2024-07-25 12:01:32.981163] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2901660 00:09:25.838 /dev/nbd0 00:09:25.838 12:01:33 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:25.838 12:01:33 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:25.838 12:01:33 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:25.838 12:01:33 -- common/autotest_common.sh@857 -- # local i 00:09:25.838 12:01:33 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:25.838 12:01:33 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:25.838 12:01:33 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:25.838 12:01:33 -- common/autotest_common.sh@861 -- # break 00:09:25.838 12:01:33 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:25.838 12:01:33 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:25.838 12:01:33 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:25.838 1+0 records in 00:09:25.838 1+0 records out 00:09:25.838 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000249147 s, 16.4 MB/s 00:09:25.838 12:01:33 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:25.838 12:01:33 -- common/autotest_common.sh@874 -- # size=4096 00:09:25.838 12:01:33 -- 
common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:25.838 12:01:33 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:25.838 12:01:33 -- common/autotest_common.sh@877 -- # return 0 00:09:25.838 12:01:33 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:09:25.838 12:01:33 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:25.838 12:01:33 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:09:25.839 12:01:33 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:25.839 12:01:33 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:09:26.095 12:01:33 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:26.095 { 00:09:26.095 "nbd_device": "/dev/nbd0", 00:09:26.095 "bdev_name": "raid" 00:09:26.095 } 00:09:26.095 ]' 00:09:26.095 12:01:33 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:26.095 { 00:09:26.095 "nbd_device": "/dev/nbd0", 00:09:26.095 "bdev_name": "raid" 00:09:26.095 } 00:09:26.095 ]' 00:09:26.095 12:01:33 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:26.095 12:01:33 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:09:26.095 12:01:33 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:09:26.095 12:01:33 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:26.095 12:01:33 -- bdev/nbd_common.sh@65 -- # count=1 00:09:26.095 12:01:33 -- bdev/nbd_common.sh@66 -- # echo 1 00:09:26.095 12:01:33 -- bdev/bdev_raid.sh@98 -- # count=1 00:09:26.095 12:01:33 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:09:26.095 12:01:33 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:09:26.095 12:01:33 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:09:26.095 12:01:33 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:09:26.095 12:01:33 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:26.095 12:01:33 -- 
bdev/bdev_raid.sh@20 -- # local blksize 00:09:26.095 12:01:33 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:09:26.095 12:01:33 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:09:26.095 12:01:33 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:09:26.095 12:01:33 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:09:26.095 12:01:33 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:09:26.095 12:01:33 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:09:26.095 12:01:33 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:09:26.095 12:01:33 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:09:26.095 12:01:33 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:09:26.095 12:01:33 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:09:26.095 12:01:33 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:09:26.095 12:01:33 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:09:26.095 12:01:33 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:09:26.095 4096+0 records in 00:09:26.095 4096+0 records out 00:09:26.095 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0302086 s, 69.4 MB/s 00:09:26.095 12:01:33 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:09:26.352 4096+0 records in 00:09:26.352 4096+0 records out 00:09:26.352 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.189918 s, 11.0 MB/s 00:09:26.352 12:01:33 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:09:26.352 12:01:33 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:09:26.352 12:01:33 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:09:26.352 12:01:33 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:09:26.352 12:01:33 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:09:26.352 12:01:33 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:09:26.352 12:01:33 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:09:26.352 128+0 records in 
00:09:26.352 128+0 records out 00:09:26.352 65536 bytes (66 kB, 64 KiB) copied, 0.00084751 s, 77.3 MB/s 00:09:26.352 12:01:33 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:09:26.352 12:01:33 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:09:26.352 12:01:33 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:09:26.352 12:01:33 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:09:26.352 12:01:33 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:09:26.352 12:01:33 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:09:26.352 12:01:33 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:09:26.352 12:01:33 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:09:26.352 2035+0 records in 00:09:26.352 2035+0 records out 00:09:26.352 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0105896 s, 98.4 MB/s 00:09:26.352 12:01:33 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:09:26.352 12:01:33 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:09:26.353 12:01:33 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:09:26.353 12:01:33 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:09:26.353 12:01:33 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:09:26.353 12:01:33 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:09:26.353 12:01:33 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:09:26.353 12:01:33 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:09:26.353 456+0 records in 00:09:26.353 456+0 records out 00:09:26.353 233472 bytes (233 kB, 228 KiB) copied, 0.00270512 s, 86.3 MB/s 00:09:26.353 12:01:33 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:09:26.353 12:01:33 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:09:26.353 12:01:33 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:09:26.353 12:01:33 -- bdev/bdev_raid.sh@37 -- # (( 
i++ )) 00:09:26.353 12:01:33 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:09:26.353 12:01:33 -- bdev/bdev_raid.sh@53 -- # return 0 00:09:26.353 12:01:33 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:09:26.353 12:01:33 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:26.353 12:01:33 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:26.353 12:01:33 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:26.353 12:01:33 -- bdev/nbd_common.sh@51 -- # local i 00:09:26.353 12:01:33 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:26.353 12:01:33 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:09:26.611 [2024-07-25 12:01:33.791397] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:26.611 12:01:33 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:26.611 12:01:33 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:26.611 12:01:33 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:26.611 12:01:33 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:26.611 12:01:33 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:26.611 12:01:33 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:26.611 12:01:33 -- bdev/nbd_common.sh@41 -- # break 00:09:26.611 12:01:33 -- bdev/nbd_common.sh@45 -- # return 0 00:09:26.611 12:01:33 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:09:26.611 12:01:33 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:26.611 12:01:33 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:09:26.868 12:01:33 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:26.868 12:01:33 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:26.868 12:01:33 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:26.868 
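[Editor's note] The three blkdiscard passes above derive their byte offsets from the 512-byte block lists set earlier in `raid_unmap_data_verify` (`unmap_blk_offs=(0 1028 321)`, `unmap_blk_nums=(128 2035 456)`), and the matching `dd conv=notrunc` steps zero the same ranges in the reference file so `cmp` can compare it against the discarded nbd device. A sketch of that arithmetic with the values from this run:

```python
blksize = 512
unmap_blk_offs = [0, 1028, 321]    # in 512-byte blocks
unmap_blk_nums = [128, 2035, 456]

# Byte (offset, length) pairs handed to 'blkdiscard -o ... -l ...' in the log.
ranges = [(off * blksize, num * blksize)
          for off, num in zip(unmap_blk_offs, unmap_blk_nums)]
assert ranges == [(0, 65536), (526336, 1041920), (164352, 233472)]

# Zero the same ranges in an in-memory stand-in for /raidrandtest
# (4096 blocks of 512 bytes, as written by the initial dd).
ref = bytearray(b"\xff" * 4096 * blksize)
for off, length in ranges:
    ref[off:off + length] = bytes(length)
assert ref[526336] == 0 and ref[526336 + 1041920 - 1] == 0
```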
12:01:34 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:26.868 12:01:34 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:26.868 12:01:34 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:26.868 12:01:34 -- bdev/nbd_common.sh@65 -- # true 00:09:26.868 12:01:34 -- bdev/nbd_common.sh@65 -- # count=0 00:09:26.868 12:01:34 -- bdev/nbd_common.sh@66 -- # echo 0 00:09:26.868 12:01:34 -- bdev/bdev_raid.sh@106 -- # count=0 00:09:26.868 12:01:34 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:09:26.868 12:01:34 -- bdev/bdev_raid.sh@111 -- # killprocess 1215024 00:09:26.868 12:01:34 -- common/autotest_common.sh@926 -- # '[' -z 1215024 ']' 00:09:26.868 12:01:34 -- common/autotest_common.sh@930 -- # kill -0 1215024 00:09:26.868 12:01:34 -- common/autotest_common.sh@931 -- # uname 00:09:26.868 12:01:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:26.868 12:01:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1215024 00:09:26.868 12:01:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:26.868 12:01:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:26.868 12:01:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1215024' 00:09:26.868 killing process with pid 1215024 00:09:26.868 12:01:34 -- common/autotest_common.sh@945 -- # kill 1215024 00:09:26.868 [2024-07-25 12:01:34.081704] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:26.868 [2024-07-25 12:01:34.081768] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:26.868 [2024-07-25 12:01:34.081800] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:26.868 [2024-07-25 12:01:34.081808] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2769e90 name raid, state offline 00:09:26.868 12:01:34 -- common/autotest_common.sh@950 -- # wait 1215024 00:09:26.868 [2024-07-25 12:01:34.100025] 
bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:27.126 12:01:34 -- bdev/bdev_raid.sh@113 -- # return 0 00:09:27.126 00:09:27.126 real 0m2.729s 00:09:27.126 user 0m3.394s 00:09:27.126 sys 0m1.072s 00:09:27.126 12:01:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:27.126 12:01:34 -- common/autotest_common.sh@10 -- # set +x 00:09:27.126 ************************************ 00:09:27.126 END TEST raid_function_test_raid0 00:09:27.126 ************************************ 00:09:27.126 12:01:34 -- bdev/bdev_raid.sh@720 -- # run_test raid_function_test_concat raid_function_test concat 00:09:27.126 12:01:34 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:27.126 12:01:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:27.126 12:01:34 -- common/autotest_common.sh@10 -- # set +x 00:09:27.126 ************************************ 00:09:27.126 START TEST raid_function_test_concat 00:09:27.126 ************************************ 00:09:27.126 12:01:34 -- common/autotest_common.sh@1104 -- # raid_function_test concat 00:09:27.126 12:01:34 -- bdev/bdev_raid.sh@81 -- # local raid_level=concat 00:09:27.126 12:01:34 -- bdev/bdev_raid.sh@82 -- # local nbd=/dev/nbd0 00:09:27.126 12:01:34 -- bdev/bdev_raid.sh@83 -- # local raid_bdev 00:09:27.126 12:01:34 -- bdev/bdev_raid.sh@86 -- # raid_pid=1215472 00:09:27.126 12:01:34 -- bdev/bdev_raid.sh@87 -- # echo 'Process raid pid: 1215472' 00:09:27.126 Process raid pid: 1215472 00:09:27.126 12:01:34 -- bdev/bdev_raid.sh@85 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:27.126 12:01:34 -- bdev/bdev_raid.sh@88 -- # waitforlisten 1215472 /var/tmp/spdk-raid.sock 00:09:27.126 12:01:34 -- common/autotest_common.sh@819 -- # '[' -z 1215472 ']' 00:09:27.126 12:01:34 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:27.126 12:01:34 -- common/autotest_common.sh@824 -- # local 
max_retries=100 00:09:27.126 12:01:34 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:27.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:27.126 12:01:34 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:27.126 12:01:34 -- common/autotest_common.sh@10 -- # set +x 00:09:27.126 [2024-07-25 12:01:34.429958] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:27.126 [2024-07-25 12:01:34.430006] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.384 [2024-07-25 12:01:34.515450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.384 [2024-07-25 12:01:34.595895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:27.384 [2024-07-25 12:01:34.650375] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:27.384 [2024-07-25 12:01:34.650399] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:27.965 12:01:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:27.965 12:01:35 -- common/autotest_common.sh@852 -- # return 0 00:09:27.965 12:01:35 -- bdev/bdev_raid.sh@90 -- # configure_raid_bdev concat 00:09:27.965 12:01:35 -- bdev/bdev_raid.sh@67 -- # local raid_level=concat 00:09:27.965 12:01:35 -- bdev/bdev_raid.sh@68 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:09:27.965 12:01:35 -- bdev/bdev_raid.sh@70 -- # cat 00:09:27.965 12:01:35 -- bdev/bdev_raid.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 00:09:28.239 [2024-07-25 12:01:35.406972] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:28.239 
[2024-07-25 12:01:35.407952] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:28.239 [2024-07-25 12:01:35.408013] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x1cd8e90 00:09:28.239 [2024-07-25 12:01:35.408021] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:28.239 [2024-07-25 12:01:35.408144] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1cefb10 00:09:28.239 [2024-07-25 12:01:35.408213] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1cd8e90 00:09:28.239 [2024-07-25 12:01:35.408219] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x1cd8e90 00:09:28.239 [2024-07-25 12:01:35.408293] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:28.239 Base_1 00:09:28.239 Base_2 00:09:28.239 12:01:35 -- bdev/bdev_raid.sh@77 -- # rm -rf /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/rpcs.txt 00:09:28.239 12:01:35 -- bdev/bdev_raid.sh@91 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:09:28.239 12:01:35 -- bdev/bdev_raid.sh@91 -- # jq -r '.[0]["name"] | select(.)' 00:09:28.497 12:01:35 -- bdev/bdev_raid.sh@91 -- # raid_bdev=raid 00:09:28.497 12:01:35 -- bdev/bdev_raid.sh@92 -- # '[' raid = '' ']' 00:09:28.497 12:01:35 -- bdev/bdev_raid.sh@97 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid /dev/nbd0 00:09:28.497 12:01:35 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:28.497 12:01:35 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:09:28.497 12:01:35 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:09:28.497 12:01:35 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:09:28.497 12:01:35 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:09:28.497 12:01:35 -- bdev/nbd_common.sh@12 -- # local i 00:09:28.497 12:01:35 -- bdev/nbd_common.sh@14 
-- # (( i = 0 )) 00:09:28.497 12:01:35 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:28.497 12:01:35 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid /dev/nbd0 00:09:28.497 [2024-07-25 12:01:35.755882] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1cd99b0 00:09:28.497 /dev/nbd0 00:09:28.497 12:01:35 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:09:28.497 12:01:35 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:09:28.497 12:01:35 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:09:28.497 12:01:35 -- common/autotest_common.sh@857 -- # local i 00:09:28.497 12:01:35 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:09:28.498 12:01:35 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:09:28.498 12:01:35 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:09:28.498 12:01:35 -- common/autotest_common.sh@861 -- # break 00:09:28.498 12:01:35 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:09:28.498 12:01:35 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:09:28.498 12:01:35 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:09:28.498 1+0 records in 00:09:28.498 1+0 records out 00:09:28.498 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000161638 s, 25.3 MB/s 00:09:28.498 12:01:35 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:28.498 12:01:35 -- common/autotest_common.sh@874 -- # size=4096 00:09:28.498 12:01:35 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:09:28.498 12:01:35 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:09:28.498 12:01:35 -- common/autotest_common.sh@877 -- # return 0 00:09:28.498 12:01:35 -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:09:28.498 12:01:35 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:09:28.498 12:01:35 -- bdev/bdev_raid.sh@98 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:09:28.498 12:01:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:28.755 12:01:35 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:09:28.755 12:01:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:09:28.755 { 00:09:28.755 "nbd_device": "/dev/nbd0", 00:09:28.755 "bdev_name": "raid" 00:09:28.755 } 00:09:28.755 ]' 00:09:28.755 12:01:35 -- bdev/nbd_common.sh@64 -- # echo '[ 00:09:28.755 { 00:09:28.755 "nbd_device": "/dev/nbd0", 00:09:28.755 "bdev_name": "raid" 00:09:28.755 } 00:09:28.755 ]' 00:09:28.755 12:01:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:28.755 12:01:36 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:09:28.755 12:01:36 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:09:28.755 12:01:36 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:28.755 12:01:36 -- bdev/nbd_common.sh@65 -- # count=1 00:09:28.755 12:01:36 -- bdev/nbd_common.sh@66 -- # echo 1 00:09:28.755 12:01:36 -- bdev/bdev_raid.sh@98 -- # count=1 00:09:28.755 12:01:36 -- bdev/bdev_raid.sh@99 -- # '[' 1 -ne 1 ']' 00:09:28.755 12:01:36 -- bdev/bdev_raid.sh@103 -- # raid_unmap_data_verify /dev/nbd0 /var/tmp/spdk-raid.sock 00:09:28.755 12:01:36 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:09:28.755 12:01:36 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:09:28.755 12:01:36 -- bdev/bdev_raid.sh@19 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:28.755 12:01:36 -- bdev/bdev_raid.sh@20 -- # local blksize 00:09:28.755 12:01:36 -- bdev/bdev_raid.sh@21 -- # lsblk -o LOG-SEC /dev/nbd0 00:09:28.755 12:01:36 -- bdev/bdev_raid.sh@21 -- # grep -v LOG-SEC 00:09:28.755 12:01:36 -- bdev/bdev_raid.sh@21 -- # cut -d ' ' -f 5 00:09:28.755 12:01:36 -- bdev/bdev_raid.sh@21 -- # blksize=512 00:09:28.755 
12:01:36 -- bdev/bdev_raid.sh@22 -- # local rw_blk_num=4096 00:09:28.755 12:01:36 -- bdev/bdev_raid.sh@23 -- # local rw_len=2097152 00:09:28.755 12:01:36 -- bdev/bdev_raid.sh@24 -- # unmap_blk_offs=('0' '1028' '321') 00:09:28.755 12:01:36 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_offs 00:09:28.755 12:01:36 -- bdev/bdev_raid.sh@25 -- # unmap_blk_nums=('128' '2035' '456') 00:09:28.755 12:01:36 -- bdev/bdev_raid.sh@25 -- # local unmap_blk_nums 00:09:28.755 12:01:36 -- bdev/bdev_raid.sh@26 -- # local unmap_off 00:09:28.755 12:01:36 -- bdev/bdev_raid.sh@27 -- # local unmap_len 00:09:28.755 12:01:36 -- bdev/bdev_raid.sh@30 -- # dd if=/dev/urandom of=/raidrandtest bs=512 count=4096 00:09:29.013 4096+0 records in 00:09:29.013 4096+0 records out 00:09:29.013 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0281387 s, 74.5 MB/s 00:09:29.013 12:01:36 -- bdev/bdev_raid.sh@31 -- # dd if=/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:09:29.013 4096+0 records in 00:09:29.013 4096+0 records out 00:09:29.013 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.185835 s, 11.3 MB/s 00:09:29.013 12:01:36 -- bdev/bdev_raid.sh@32 -- # blockdev --flushbufs /dev/nbd0 00:09:29.013 12:01:36 -- bdev/bdev_raid.sh@35 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:09:29.013 12:01:36 -- bdev/bdev_raid.sh@37 -- # (( i = 0 )) 00:09:29.013 12:01:36 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:09:29.013 12:01:36 -- bdev/bdev_raid.sh@38 -- # unmap_off=0 00:09:29.013 12:01:36 -- bdev/bdev_raid.sh@39 -- # unmap_len=65536 00:09:29.013 12:01:36 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:09:29.013 128+0 records in 00:09:29.013 128+0 records out 00:09:29.013 65536 bytes (66 kB, 64 KiB) copied, 0.00084601 s, 77.5 MB/s 00:09:29.013 12:01:36 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:09:29.013 12:01:36 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:09:29.013 12:01:36 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 
2097152 /raidrandtest /dev/nbd0 00:09:29.013 12:01:36 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:09:29.013 12:01:36 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:09:29.013 12:01:36 -- bdev/bdev_raid.sh@38 -- # unmap_off=526336 00:09:29.013 12:01:36 -- bdev/bdev_raid.sh@39 -- # unmap_len=1041920 00:09:29.013 12:01:36 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:09:29.013 2035+0 records in 00:09:29.013 2035+0 records out 00:09:29.013 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0118121 s, 88.2 MB/s 00:09:29.013 12:01:36 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:09:29.271 12:01:36 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:09:29.271 12:01:36 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:09:29.271 12:01:36 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:09:29.271 12:01:36 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:09:29.271 12:01:36 -- bdev/bdev_raid.sh@38 -- # unmap_off=164352 00:09:29.271 12:01:36 -- bdev/bdev_raid.sh@39 -- # unmap_len=233472 00:09:29.271 12:01:36 -- bdev/bdev_raid.sh@42 -- # dd if=/dev/zero of=/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:09:29.271 456+0 records in 00:09:29.271 456+0 records out 00:09:29.271 233472 bytes (233 kB, 228 KiB) copied, 0.00273536 s, 85.4 MB/s 00:09:29.271 12:01:36 -- bdev/bdev_raid.sh@45 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:09:29.271 12:01:36 -- bdev/bdev_raid.sh@46 -- # blockdev --flushbufs /dev/nbd0 00:09:29.271 12:01:36 -- bdev/bdev_raid.sh@49 -- # cmp -b -n 2097152 /raidrandtest /dev/nbd0 00:09:29.271 12:01:36 -- bdev/bdev_raid.sh@37 -- # (( i++ )) 00:09:29.271 12:01:36 -- bdev/bdev_raid.sh@37 -- # (( i < 3 )) 00:09:29.271 12:01:36 -- bdev/bdev_raid.sh@53 -- # return 0 00:09:29.271 12:01:36 -- bdev/bdev_raid.sh@105 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:09:29.271 12:01:36 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 
00:09:29.271 12:01:36 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:09:29.271 12:01:36 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:09:29.271 12:01:36 -- bdev/nbd_common.sh@51 -- # local i 00:09:29.271 12:01:36 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:09:29.271 12:01:36 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:09:29.271 [2024-07-25 12:01:36.554064] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:29.271 12:01:36 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:09:29.271 12:01:36 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:09:29.271 12:01:36 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:09:29.271 12:01:36 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:09:29.271 12:01:36 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:09:29.271 12:01:36 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:09:29.271 12:01:36 -- bdev/nbd_common.sh@41 -- # break 00:09:29.271 12:01:36 -- bdev/nbd_common.sh@45 -- # return 0 00:09:29.271 12:01:36 -- bdev/bdev_raid.sh@106 -- # nbd_get_count /var/tmp/spdk-raid.sock 00:09:29.271 12:01:36 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:09:29.271 12:01:36 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_get_disks 00:09:29.529 12:01:36 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:09:29.529 12:01:36 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:09:29.529 12:01:36 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:09:29.529 12:01:36 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:09:29.529 12:01:36 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:09:29.529 12:01:36 -- bdev/nbd_common.sh@65 -- # echo '' 00:09:29.529 12:01:36 -- bdev/nbd_common.sh@65 -- # true 00:09:29.529 12:01:36 -- bdev/nbd_common.sh@65 -- # count=0 00:09:29.529 12:01:36 -- 
bdev/nbd_common.sh@66 -- # echo 0 00:09:29.529 12:01:36 -- bdev/bdev_raid.sh@106 -- # count=0 00:09:29.529 12:01:36 -- bdev/bdev_raid.sh@107 -- # '[' 0 -ne 0 ']' 00:09:29.529 12:01:36 -- bdev/bdev_raid.sh@111 -- # killprocess 1215472 00:09:29.529 12:01:36 -- common/autotest_common.sh@926 -- # '[' -z 1215472 ']' 00:09:29.529 12:01:36 -- common/autotest_common.sh@930 -- # kill -0 1215472 00:09:29.529 12:01:36 -- common/autotest_common.sh@931 -- # uname 00:09:29.529 12:01:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:29.529 12:01:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1215472 00:09:29.787 12:01:36 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:29.787 12:01:36 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:29.787 12:01:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1215472' 00:09:29.787 killing process with pid 1215472 00:09:29.787 12:01:36 -- common/autotest_common.sh@945 -- # kill 1215472 00:09:29.787 [2024-07-25 12:01:36.846449] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:29.787 [2024-07-25 12:01:36.846505] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:29.787 [2024-07-25 12:01:36.846536] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:29.787 [2024-07-25 12:01:36.846545] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1cd8e90 name raid, state offline 00:09:29.787 12:01:36 -- common/autotest_common.sh@950 -- # wait 1215472 00:09:29.787 [2024-07-25 12:01:36.862430] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:29.787 12:01:37 -- bdev/bdev_raid.sh@113 -- # return 0 00:09:29.787 00:09:29.787 real 0m2.694s 00:09:29.787 user 0m3.407s 00:09:29.787 sys 0m1.017s 00:09:29.787 12:01:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:29.787 12:01:37 -- common/autotest_common.sh@10 -- # 
set +x 00:09:29.787 ************************************ 00:09:29.787 END TEST raid_function_test_concat 00:09:29.787 ************************************ 00:09:30.046 12:01:37 -- bdev/bdev_raid.sh@723 -- # run_test raid0_resize_test raid0_resize_test 00:09:30.046 12:01:37 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:30.046 12:01:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:30.046 12:01:37 -- common/autotest_common.sh@10 -- # set +x 00:09:30.046 ************************************ 00:09:30.046 START TEST raid0_resize_test 00:09:30.046 ************************************ 00:09:30.046 12:01:37 -- common/autotest_common.sh@1104 -- # raid0_resize_test 00:09:30.046 12:01:37 -- bdev/bdev_raid.sh@293 -- # local blksize=512 00:09:30.046 12:01:37 -- bdev/bdev_raid.sh@294 -- # local bdev_size_mb=32 00:09:30.046 12:01:37 -- bdev/bdev_raid.sh@295 -- # local new_bdev_size_mb=64 00:09:30.046 12:01:37 -- bdev/bdev_raid.sh@296 -- # local blkcnt 00:09:30.046 12:01:37 -- bdev/bdev_raid.sh@297 -- # local raid_size_mb 00:09:30.046 12:01:37 -- bdev/bdev_raid.sh@298 -- # local new_raid_size_mb 00:09:30.046 12:01:37 -- bdev/bdev_raid.sh@301 -- # raid_pid=1215912 00:09:30.046 12:01:37 -- bdev/bdev_raid.sh@302 -- # echo 'Process raid pid: 1215912' 00:09:30.046 Process raid pid: 1215912 00:09:30.046 12:01:37 -- bdev/bdev_raid.sh@303 -- # waitforlisten 1215912 /var/tmp/spdk-raid.sock 00:09:30.046 12:01:37 -- common/autotest_common.sh@819 -- # '[' -z 1215912 ']' 00:09:30.046 12:01:37 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:30.046 12:01:37 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:30.046 12:01:37 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:30.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:09:30.046 12:01:37 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:30.046 12:01:37 -- bdev/bdev_raid.sh@300 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:30.046 12:01:37 -- common/autotest_common.sh@10 -- # set +x 00:09:30.046 [2024-07-25 12:01:37.172624] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:30.046 [2024-07-25 12:01:37.172676] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.046 [2024-07-25 12:01:37.261360] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.046 [2024-07-25 12:01:37.352428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.304 [2024-07-25 12:01:37.412403] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:30.304 [2024-07-25 12:01:37.412430] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:30.870 12:01:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:30.870 12:01:37 -- common/autotest_common.sh@852 -- # return 0 00:09:30.870 12:01:37 -- bdev/bdev_raid.sh@305 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:09:30.870 Base_1 00:09:30.870 12:01:38 -- bdev/bdev_raid.sh@306 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:09:31.128 Base_2 00:09:31.128 12:01:38 -- bdev/bdev_raid.sh@308 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:09:31.128 [2024-07-25 12:01:38.404959] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:09:31.128 
[2024-07-25 12:01:38.405950] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:09:31.128 [2024-07-25 12:01:38.405993] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x160de20 00:09:31.128 [2024-07-25 12:01:38.406000] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:31.128 [2024-07-25 12:01:38.406147] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x160f240 00:09:31.128 [2024-07-25 12:01:38.406203] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x160de20 00:09:31.128 [2024-07-25 12:01:38.406209] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x160de20 00:09:31.128 [2024-07-25 12:01:38.406291] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:31.128 12:01:38 -- bdev/bdev_raid.sh@311 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:09:31.386 [2024-07-25 12:01:38.557390] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:31.386 [2024-07-25 12:01:38.557412] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:09:31.386 true 00:09:31.386 12:01:38 -- bdev/bdev_raid.sh@314 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:09:31.386 12:01:38 -- bdev/bdev_raid.sh@314 -- # jq '.[].num_blocks' 00:09:31.643 [2024-07-25 12:01:38.725863] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:31.643 12:01:38 -- bdev/bdev_raid.sh@314 -- # blkcnt=131072 00:09:31.643 12:01:38 -- bdev/bdev_raid.sh@315 -- # raid_size_mb=64 00:09:31.643 12:01:38 -- bdev/bdev_raid.sh@316 -- # '[' 64 '!=' 64 ']' 00:09:31.643 12:01:38 -- bdev/bdev_raid.sh@322 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:09:31.643 [2024-07-25 12:01:38.882164] bdev_raid.c:2069:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:09:31.643 [2024-07-25 12:01:38.882181] bdev_raid.c:2082:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:09:31.643 [2024-07-25 12:01:38.882199] raid0.c: 402:raid0_resize: *NOTICE*: raid0 'Raid': min blockcount was changed from 262144 to 262144 00:09:31.643 [2024-07-25 12:01:38.882214] bdev_raid.c:1572:raid_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:31.643 true 00:09:31.643 12:01:38 -- bdev/bdev_raid.sh@325 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:09:31.643 12:01:38 -- bdev/bdev_raid.sh@325 -- # jq '.[].num_blocks' 00:09:31.901 [2024-07-25 12:01:39.042668] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:31.901 12:01:39 -- bdev/bdev_raid.sh@325 -- # blkcnt=262144 00:09:31.901 12:01:39 -- bdev/bdev_raid.sh@326 -- # raid_size_mb=128 00:09:31.901 12:01:39 -- bdev/bdev_raid.sh@327 -- # '[' 128 '!=' 128 ']' 00:09:31.901 12:01:39 -- bdev/bdev_raid.sh@332 -- # killprocess 1215912 00:09:31.901 12:01:39 -- common/autotest_common.sh@926 -- # '[' -z 1215912 ']' 00:09:31.901 12:01:39 -- common/autotest_common.sh@930 -- # kill -0 1215912 00:09:31.901 12:01:39 -- common/autotest_common.sh@931 -- # uname 00:09:31.901 12:01:39 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:31.901 12:01:39 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1215912 00:09:31.901 12:01:39 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:31.901 12:01:39 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:31.901 12:01:39 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1215912' 00:09:31.901 
killing process with pid 1215912 00:09:31.901 12:01:39 -- common/autotest_common.sh@945 -- # kill 1215912 00:09:31.901 [2024-07-25 12:01:39.104999] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:31.901 [2024-07-25 12:01:39.105047] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:31.901 [2024-07-25 12:01:39.105079] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:31.901 [2024-07-25 12:01:39.105087] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x160de20 name Raid, state offline 00:09:31.901 12:01:39 -- common/autotest_common.sh@950 -- # wait 1215912 00:09:31.901 [2024-07-25 12:01:39.106172] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:32.158 12:01:39 -- bdev/bdev_raid.sh@334 -- # return 0 00:09:32.158 00:09:32.158 real 0m2.179s 00:09:32.158 user 0m3.159s 00:09:32.158 sys 0m0.500s 00:09:32.158 12:01:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:32.158 12:01:39 -- common/autotest_common.sh@10 -- # set +x 00:09:32.158 ************************************ 00:09:32.158 END TEST raid0_resize_test 00:09:32.158 ************************************ 00:09:32.158 12:01:39 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:09:32.158 12:01:39 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:09:32.158 12:01:39 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:09:32.158 12:01:39 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:09:32.158 12:01:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:32.158 12:01:39 -- common/autotest_common.sh@10 -- # set +x 00:09:32.158 ************************************ 00:09:32.158 START TEST raid_state_function_test 00:09:32.158 ************************************ 00:09:32.158 12:01:39 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 false 00:09:32.158 
12:01:39 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:09:32.158 12:01:39 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:09:32.158 12:01:39 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:09:32.158 12:01:39 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:09:32.158 12:01:39 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:09:32.158 12:01:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:32.158 12:01:39 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:09:32.158 12:01:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:32.158 12:01:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:32.158 12:01:39 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:09:32.158 12:01:39 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:32.158 12:01:39 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:32.158 12:01:39 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:32.158 12:01:39 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:09:32.158 12:01:39 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:09:32.158 12:01:39 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:09:32.158 12:01:39 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:09:32.158 12:01:39 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:09:32.158 12:01:39 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:09:32.158 12:01:39 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:09:32.159 12:01:39 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:09:32.159 12:01:39 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:09:32.159 12:01:39 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:09:32.159 12:01:39 -- bdev/bdev_raid.sh@226 -- # raid_pid=1216294 00:09:32.159 12:01:39 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 1216294' 00:09:32.159 Process raid pid: 1216294 00:09:32.159 12:01:39 -- bdev/bdev_raid.sh@225 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:32.159 12:01:39 -- bdev/bdev_raid.sh@228 -- # waitforlisten 1216294 /var/tmp/spdk-raid.sock 00:09:32.159 12:01:39 -- common/autotest_common.sh@819 -- # '[' -z 1216294 ']' 00:09:32.159 12:01:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:32.159 12:01:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:32.159 12:01:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:32.159 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:32.159 12:01:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:32.159 12:01:39 -- common/autotest_common.sh@10 -- # set +x 00:09:32.159 [2024-07-25 12:01:39.407240] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:09:32.159 [2024-07-25 12:01:39.407296] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.416 [2024-07-25 12:01:39.496669] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.416 [2024-07-25 12:01:39.586170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.416 [2024-07-25 12:01:39.653000] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.416 [2024-07-25 12:01:39.653027] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:32.979 12:01:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:32.979 12:01:40 -- common/autotest_common.sh@852 -- # return 0 00:09:32.979 12:01:40 -- bdev/bdev_raid.sh@232 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:33.237 [2024-07-25 12:01:40.368317] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:33.237 [2024-07-25 12:01:40.368352] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:33.237 [2024-07-25 12:01:40.368360] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:33.237 [2024-07-25 12:01:40.368367] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:33.237 12:01:40 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:33.237 12:01:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:33.237 12:01:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:33.237 12:01:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:33.237 12:01:40 -- bdev/bdev_raid.sh@120 -- # local 
strip_size=64 00:09:33.237 12:01:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:33.237 12:01:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:33.237 12:01:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:33.237 12:01:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:33.237 12:01:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:33.237 12:01:40 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:33.237 12:01:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.237 12:01:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:33.237 "name": "Existed_Raid", 00:09:33.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.237 "strip_size_kb": 64, 00:09:33.237 "state": "configuring", 00:09:33.237 "raid_level": "raid0", 00:09:33.237 "superblock": false, 00:09:33.237 "num_base_bdevs": 2, 00:09:33.237 "num_base_bdevs_discovered": 0, 00:09:33.237 "num_base_bdevs_operational": 2, 00:09:33.237 "base_bdevs_list": [ 00:09:33.237 { 00:09:33.237 "name": "BaseBdev1", 00:09:33.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.237 "is_configured": false, 00:09:33.237 "data_offset": 0, 00:09:33.237 "data_size": 0 00:09:33.237 }, 00:09:33.237 { 00:09:33.237 "name": "BaseBdev2", 00:09:33.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:33.237 "is_configured": false, 00:09:33.237 "data_offset": 0, 00:09:33.237 "data_size": 0 00:09:33.237 } 00:09:33.237 ] 00:09:33.237 }' 00:09:33.237 12:01:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:33.237 12:01:40 -- common/autotest_common.sh@10 -- # set +x 00:09:33.801 12:01:41 -- bdev/bdev_raid.sh@234 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:34.058 [2024-07-25 12:01:41.182350] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete 
raid bdev: Existed_Raid 00:09:34.058 [2024-07-25 12:01:41.182373] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x170cd40 name Existed_Raid, state configuring 00:09:34.059 12:01:41 -- bdev/bdev_raid.sh@238 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:34.059 [2024-07-25 12:01:41.350769] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:34.059 [2024-07-25 12:01:41.350789] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:34.059 [2024-07-25 12:01:41.350795] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:34.059 [2024-07-25 12:01:41.350802] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:34.316 12:01:41 -- bdev/bdev_raid.sh@239 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:34.316 [2024-07-25 12:01:41.515714] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:34.316 BaseBdev1 00:09:34.316 12:01:41 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:09:34.316 12:01:41 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:09:34.316 12:01:41 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:34.316 12:01:41 -- common/autotest_common.sh@889 -- # local i 00:09:34.316 12:01:41 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:34.316 12:01:41 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:34.316 12:01:41 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:34.573 12:01:41 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:34.573 [ 00:09:34.573 { 00:09:34.573 "name": "BaseBdev1", 00:09:34.573 "aliases": [ 00:09:34.573 "c3b2eb58-1fe7-440c-a46d-db02cbf4a4f2" 00:09:34.573 ], 00:09:34.573 "product_name": "Malloc disk", 00:09:34.573 "block_size": 512, 00:09:34.573 "num_blocks": 65536, 00:09:34.573 "uuid": "c3b2eb58-1fe7-440c-a46d-db02cbf4a4f2", 00:09:34.573 "assigned_rate_limits": { 00:09:34.573 "rw_ios_per_sec": 0, 00:09:34.573 "rw_mbytes_per_sec": 0, 00:09:34.573 "r_mbytes_per_sec": 0, 00:09:34.573 "w_mbytes_per_sec": 0 00:09:34.573 }, 00:09:34.573 "claimed": true, 00:09:34.573 "claim_type": "exclusive_write", 00:09:34.573 "zoned": false, 00:09:34.573 "supported_io_types": { 00:09:34.573 "read": true, 00:09:34.573 "write": true, 00:09:34.573 "unmap": true, 00:09:34.573 "write_zeroes": true, 00:09:34.573 "flush": true, 00:09:34.573 "reset": true, 00:09:34.573 "compare": false, 00:09:34.573 "compare_and_write": false, 00:09:34.573 "abort": true, 00:09:34.573 "nvme_admin": false, 00:09:34.573 "nvme_io": false 00:09:34.573 }, 00:09:34.573 "memory_domains": [ 00:09:34.573 { 00:09:34.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.573 "dma_device_type": 2 00:09:34.573 } 00:09:34.573 ], 00:09:34.573 "driver_specific": {} 00:09:34.573 } 00:09:34.573 ] 00:09:34.573 12:01:41 -- common/autotest_common.sh@895 -- # return 0 00:09:34.573 12:01:41 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:34.573 12:01:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:34.573 12:01:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:34.573 12:01:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:34.573 12:01:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:34.573 12:01:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:34.573 12:01:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:34.573 12:01:41 
-- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:34.573 12:01:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:34.573 12:01:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:34.573 12:01:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:34.573 12:01:41 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:34.831 12:01:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:34.831 "name": "Existed_Raid", 00:09:34.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.831 "strip_size_kb": 64, 00:09:34.831 "state": "configuring", 00:09:34.831 "raid_level": "raid0", 00:09:34.831 "superblock": false, 00:09:34.831 "num_base_bdevs": 2, 00:09:34.831 "num_base_bdevs_discovered": 1, 00:09:34.831 "num_base_bdevs_operational": 2, 00:09:34.831 "base_bdevs_list": [ 00:09:34.831 { 00:09:34.831 "name": "BaseBdev1", 00:09:34.831 "uuid": "c3b2eb58-1fe7-440c-a46d-db02cbf4a4f2", 00:09:34.831 "is_configured": true, 00:09:34.831 "data_offset": 0, 00:09:34.831 "data_size": 65536 00:09:34.831 }, 00:09:34.831 { 00:09:34.831 "name": "BaseBdev2", 00:09:34.831 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:34.831 "is_configured": false, 00:09:34.831 "data_offset": 0, 00:09:34.831 "data_size": 0 00:09:34.831 } 00:09:34.831 ] 00:09:34.831 }' 00:09:34.831 12:01:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:34.831 12:01:42 -- common/autotest_common.sh@10 -- # set +x 00:09:35.396 12:01:42 -- bdev/bdev_raid.sh@242 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:35.396 [2024-07-25 12:01:42.666695] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:35.396 [2024-07-25 12:01:42.666730] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x170cfc0 name Existed_Raid, state configuring 00:09:35.396 
12:01:42 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:09:35.396 12:01:42 -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:35.653 [2024-07-25 12:01:42.835157] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:35.653 [2024-07-25 12:01:42.836240] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:35.653 [2024-07-25 12:01:42.836264] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:35.653 12:01:42 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:09:35.653 12:01:42 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:35.653 12:01:42 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:35.653 12:01:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:35.653 12:01:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:35.653 12:01:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:35.653 12:01:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:35.653 12:01:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:35.653 12:01:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:35.653 12:01:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:35.653 12:01:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:35.654 12:01:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:35.654 12:01:42 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:35.654 12:01:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:35.911 12:01:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:35.911 "name": "Existed_Raid", 00:09:35.911 
"uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.911 "strip_size_kb": 64, 00:09:35.911 "state": "configuring", 00:09:35.911 "raid_level": "raid0", 00:09:35.911 "superblock": false, 00:09:35.911 "num_base_bdevs": 2, 00:09:35.911 "num_base_bdevs_discovered": 1, 00:09:35.911 "num_base_bdevs_operational": 2, 00:09:35.911 "base_bdevs_list": [ 00:09:35.911 { 00:09:35.911 "name": "BaseBdev1", 00:09:35.911 "uuid": "c3b2eb58-1fe7-440c-a46d-db02cbf4a4f2", 00:09:35.911 "is_configured": true, 00:09:35.911 "data_offset": 0, 00:09:35.911 "data_size": 65536 00:09:35.911 }, 00:09:35.911 { 00:09:35.911 "name": "BaseBdev2", 00:09:35.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:35.911 "is_configured": false, 00:09:35.911 "data_offset": 0, 00:09:35.911 "data_size": 0 00:09:35.911 } 00:09:35.911 ] 00:09:35.911 }' 00:09:35.911 12:01:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:35.911 12:01:43 -- common/autotest_common.sh@10 -- # set +x 00:09:36.476 12:01:43 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:36.476 [2024-07-25 12:01:43.656049] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:36.476 [2024-07-25 12:01:43.656082] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x170c630 00:09:36.476 [2024-07-25 12:01:43.656088] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:36.476 [2024-07-25 12:01:43.656214] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x170eef0 00:09:36.476 [2024-07-25 12:01:43.656301] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x170c630 00:09:36.476 [2024-07-25 12:01:43.656308] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x170c630 00:09:36.476 [2024-07-25 12:01:43.656421] bdev_raid.c: 316:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:09:36.476 BaseBdev2 00:09:36.476 12:01:43 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:09:36.476 12:01:43 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:09:36.476 12:01:43 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:36.476 12:01:43 -- common/autotest_common.sh@889 -- # local i 00:09:36.476 12:01:43 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:36.476 12:01:43 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:36.476 12:01:43 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:36.734 12:01:43 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:36.734 [ 00:09:36.734 { 00:09:36.734 "name": "BaseBdev2", 00:09:36.734 "aliases": [ 00:09:36.734 "945e0fcf-884c-4742-9657-65538bb24873" 00:09:36.734 ], 00:09:36.734 "product_name": "Malloc disk", 00:09:36.734 "block_size": 512, 00:09:36.734 "num_blocks": 65536, 00:09:36.734 "uuid": "945e0fcf-884c-4742-9657-65538bb24873", 00:09:36.734 "assigned_rate_limits": { 00:09:36.734 "rw_ios_per_sec": 0, 00:09:36.734 "rw_mbytes_per_sec": 0, 00:09:36.734 "r_mbytes_per_sec": 0, 00:09:36.734 "w_mbytes_per_sec": 0 00:09:36.734 }, 00:09:36.734 "claimed": true, 00:09:36.734 "claim_type": "exclusive_write", 00:09:36.734 "zoned": false, 00:09:36.734 "supported_io_types": { 00:09:36.734 "read": true, 00:09:36.734 "write": true, 00:09:36.734 "unmap": true, 00:09:36.734 "write_zeroes": true, 00:09:36.734 "flush": true, 00:09:36.734 "reset": true, 00:09:36.734 "compare": false, 00:09:36.734 "compare_and_write": false, 00:09:36.734 "abort": true, 00:09:36.734 "nvme_admin": false, 00:09:36.734 "nvme_io": false 00:09:36.734 }, 00:09:36.734 "memory_domains": [ 00:09:36.734 { 00:09:36.734 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:36.734 "dma_device_type": 2 00:09:36.734 } 00:09:36.734 ], 00:09:36.734 "driver_specific": {} 00:09:36.734 } 00:09:36.734 ] 00:09:36.734 12:01:44 -- common/autotest_common.sh@895 -- # return 0 00:09:36.734 12:01:44 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:36.734 12:01:44 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:36.734 12:01:44 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:09:36.734 12:01:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:36.734 12:01:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:36.734 12:01:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:36.734 12:01:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:36.734 12:01:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:36.734 12:01:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:36.734 12:01:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:36.734 12:01:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:36.734 12:01:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:36.734 12:01:44 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:36.734 12:01:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:36.991 12:01:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:36.991 "name": "Existed_Raid", 00:09:36.991 "uuid": "5fd829d6-bc1a-4504-9f78-74e3759916b3", 00:09:36.991 "strip_size_kb": 64, 00:09:36.991 "state": "online", 00:09:36.991 "raid_level": "raid0", 00:09:36.991 "superblock": false, 00:09:36.991 "num_base_bdevs": 2, 00:09:36.991 "num_base_bdevs_discovered": 2, 00:09:36.991 "num_base_bdevs_operational": 2, 00:09:36.991 "base_bdevs_list": [ 00:09:36.991 { 00:09:36.991 "name": "BaseBdev1", 00:09:36.991 "uuid": "c3b2eb58-1fe7-440c-a46d-db02cbf4a4f2", 00:09:36.991 
"is_configured": true, 00:09:36.991 "data_offset": 0, 00:09:36.991 "data_size": 65536 00:09:36.992 }, 00:09:36.992 { 00:09:36.992 "name": "BaseBdev2", 00:09:36.992 "uuid": "945e0fcf-884c-4742-9657-65538bb24873", 00:09:36.992 "is_configured": true, 00:09:36.992 "data_offset": 0, 00:09:36.992 "data_size": 65536 00:09:36.992 } 00:09:36.992 ] 00:09:36.992 }' 00:09:36.992 12:01:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:36.992 12:01:44 -- common/autotest_common.sh@10 -- # set +x 00:09:37.557 12:01:44 -- bdev/bdev_raid.sh@262 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:37.557 [2024-07-25 12:01:44.791021] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:37.557 [2024-07-25 12:01:44.791043] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:37.557 [2024-07-25 12:01:44.791080] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:37.557 12:01:44 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:09:37.557 12:01:44 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:09:37.557 12:01:44 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:09:37.557 12:01:44 -- bdev/bdev_raid.sh@197 -- # return 1 00:09:37.557 12:01:44 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:09:37.557 12:01:44 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:09:37.557 12:01:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:37.557 12:01:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:09:37.557 12:01:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:37.557 12:01:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:37.557 12:01:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:09:37.557 12:01:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:37.557 12:01:44 -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:09:37.557 12:01:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:37.557 12:01:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:37.557 12:01:44 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:37.557 12:01:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.815 12:01:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:37.815 "name": "Existed_Raid", 00:09:37.815 "uuid": "5fd829d6-bc1a-4504-9f78-74e3759916b3", 00:09:37.815 "strip_size_kb": 64, 00:09:37.815 "state": "offline", 00:09:37.815 "raid_level": "raid0", 00:09:37.815 "superblock": false, 00:09:37.815 "num_base_bdevs": 2, 00:09:37.815 "num_base_bdevs_discovered": 1, 00:09:37.815 "num_base_bdevs_operational": 1, 00:09:37.815 "base_bdevs_list": [ 00:09:37.815 { 00:09:37.815 "name": null, 00:09:37.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.815 "is_configured": false, 00:09:37.815 "data_offset": 0, 00:09:37.815 "data_size": 65536 00:09:37.815 }, 00:09:37.815 { 00:09:37.815 "name": "BaseBdev2", 00:09:37.815 "uuid": "945e0fcf-884c-4742-9657-65538bb24873", 00:09:37.815 "is_configured": true, 00:09:37.815 "data_offset": 0, 00:09:37.815 "data_size": 65536 00:09:37.815 } 00:09:37.815 ] 00:09:37.815 }' 00:09:37.815 12:01:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:37.815 12:01:44 -- common/autotest_common.sh@10 -- # set +x 00:09:38.380 12:01:45 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:09:38.380 12:01:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:38.380 12:01:45 -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:38.380 12:01:45 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:38.380 12:01:45 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:38.380 12:01:45 -- bdev/bdev_raid.sh@275 
-- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:38.380 12:01:45 -- bdev/bdev_raid.sh@279 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:38.637 [2024-07-25 12:01:45.786645] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:38.637 [2024-07-25 12:01:45.786688] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x170c630 name Existed_Raid, state offline 00:09:38.637 12:01:45 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:38.637 12:01:45 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:38.637 12:01:45 -- bdev/bdev_raid.sh@281 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:38.637 12:01:45 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:09:38.895 12:01:45 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:09:38.895 12:01:45 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:09:38.895 12:01:45 -- bdev/bdev_raid.sh@287 -- # killprocess 1216294 00:09:38.895 12:01:45 -- common/autotest_common.sh@926 -- # '[' -z 1216294 ']' 00:09:38.895 12:01:45 -- common/autotest_common.sh@930 -- # kill -0 1216294 00:09:38.895 12:01:45 -- common/autotest_common.sh@931 -- # uname 00:09:38.895 12:01:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:38.895 12:01:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1216294 00:09:38.895 12:01:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:38.895 12:01:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:38.895 12:01:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1216294' 00:09:38.895 killing process with pid 1216294 00:09:38.895 12:01:46 -- common/autotest_common.sh@945 -- # kill 1216294 00:09:38.895 [2024-07-25 12:01:46.029081] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:38.895 12:01:46 -- 
common/autotest_common.sh@950 -- # wait 1216294 00:09:38.895 [2024-07-25 12:01:46.029889] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:39.153 12:01:46 -- bdev/bdev_raid.sh@289 -- # return 0 00:09:39.153 00:09:39.153 real 0m6.893s 00:09:39.153 user 0m11.883s 00:09:39.153 sys 0m1.395s 00:09:39.153 12:01:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:39.153 12:01:46 -- common/autotest_common.sh@10 -- # set +x 00:09:39.153 ************************************ 00:09:39.153 END TEST raid_state_function_test 00:09:39.153 ************************************ 00:09:39.153 12:01:46 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:09:39.153 12:01:46 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:09:39.153 12:01:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:39.153 12:01:46 -- common/autotest_common.sh@10 -- # set +x 00:09:39.153 ************************************ 00:09:39.153 START TEST raid_state_function_test_sb 00:09:39.153 ************************************ 00:09:39.153 12:01:46 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 2 true 00:09:39.153 12:01:46 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:09:39.153 12:01:46 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:09:39.153 12:01:46 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:09:39.153 12:01:46 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:09:39.153 12:01:46 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:09:39.153 12:01:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:39.153 12:01:46 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:09:39.153 12:01:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:39.153 12:01:46 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:39.153 12:01:46 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:09:39.153 12:01:46 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:39.153 12:01:46 -- 
bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:39.153 12:01:46 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:39.153 12:01:46 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:09:39.153 12:01:46 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:09:39.153 12:01:46 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:09:39.153 12:01:46 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:09:39.153 12:01:46 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:09:39.153 12:01:46 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:09:39.153 12:01:46 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:09:39.153 12:01:46 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:09:39.153 12:01:46 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:09:39.153 12:01:46 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:09:39.153 12:01:46 -- bdev/bdev_raid.sh@226 -- # raid_pid=1217397 00:09:39.153 12:01:46 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 1217397' 00:09:39.153 Process raid pid: 1217397 00:09:39.153 12:01:46 -- bdev/bdev_raid.sh@228 -- # waitforlisten 1217397 /var/tmp/spdk-raid.sock 00:09:39.153 12:01:46 -- common/autotest_common.sh@819 -- # '[' -z 1217397 ']' 00:09:39.153 12:01:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:39.153 12:01:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:39.153 12:01:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:39.153 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:09:39.153 12:01:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:39.154 12:01:46 -- bdev/bdev_raid.sh@225 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:39.154 12:01:46 -- common/autotest_common.sh@10 -- # set +x 00:09:39.154 [2024-07-25 12:01:46.342984] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:39.154 [2024-07-25 12:01:46.343034] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.154 [2024-07-25 12:01:46.432275] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.411 [2024-07-25 12:01:46.524772] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.411 [2024-07-25 12:01:46.582994] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.411 [2024-07-25 12:01:46.583020] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.977 12:01:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:39.977 12:01:47 -- common/autotest_common.sh@852 -- # return 0 00:09:39.977 12:01:47 -- bdev/bdev_raid.sh@232 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:39.977 [2024-07-25 12:01:47.278895] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:39.977 [2024-07-25 12:01:47.278930] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:39.977 [2024-07-25 12:01:47.278937] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:39.977 [2024-07-25 12:01:47.278945] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 
doesn't exist now 00:09:40.235 12:01:47 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:40.235 12:01:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:40.235 12:01:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:40.235 12:01:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:40.235 12:01:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:40.235 12:01:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:40.235 12:01:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:40.235 12:01:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:40.235 12:01:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:40.235 12:01:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:40.235 12:01:47 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:40.235 12:01:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.235 12:01:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:40.235 "name": "Existed_Raid", 00:09:40.235 "uuid": "b25032cf-54e9-47e0-abd5-cd36f51dd4f1", 00:09:40.236 "strip_size_kb": 64, 00:09:40.236 "state": "configuring", 00:09:40.236 "raid_level": "raid0", 00:09:40.236 "superblock": true, 00:09:40.236 "num_base_bdevs": 2, 00:09:40.236 "num_base_bdevs_discovered": 0, 00:09:40.236 "num_base_bdevs_operational": 2, 00:09:40.236 "base_bdevs_list": [ 00:09:40.236 { 00:09:40.236 "name": "BaseBdev1", 00:09:40.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.236 "is_configured": false, 00:09:40.236 "data_offset": 0, 00:09:40.236 "data_size": 0 00:09:40.236 }, 00:09:40.236 { 00:09:40.236 "name": "BaseBdev2", 00:09:40.236 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.236 "is_configured": false, 00:09:40.236 "data_offset": 0, 00:09:40.236 "data_size": 0 00:09:40.236 } 
00:09:40.236 ] 00:09:40.236 }' 00:09:40.236 12:01:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:40.236 12:01:47 -- common/autotest_common.sh@10 -- # set +x 00:09:40.802 12:01:47 -- bdev/bdev_raid.sh@234 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:40.802 [2024-07-25 12:01:48.040760] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:40.802 [2024-07-25 12:01:48.040782] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2445d40 name Existed_Raid, state configuring 00:09:40.802 12:01:48 -- bdev/bdev_raid.sh@238 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:41.061 [2024-07-25 12:01:48.209210] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:41.061 [2024-07-25 12:01:48.209233] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:41.061 [2024-07-25 12:01:48.209239] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:41.061 [2024-07-25 12:01:48.209247] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:41.061 12:01:48 -- bdev/bdev_raid.sh@239 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:41.319 [2024-07-25 12:01:48.386309] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:41.319 BaseBdev1 00:09:41.319 12:01:48 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:09:41.319 12:01:48 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:09:41.319 12:01:48 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:41.319 12:01:48 -- common/autotest_common.sh@889 -- # local i 00:09:41.319 
12:01:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:41.319 12:01:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:41.319 12:01:48 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:41.319 12:01:48 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:41.578 [ 00:09:41.578 { 00:09:41.578 "name": "BaseBdev1", 00:09:41.578 "aliases": [ 00:09:41.578 "83cda687-f8af-4c21-b2e8-8996b23e591f" 00:09:41.578 ], 00:09:41.578 "product_name": "Malloc disk", 00:09:41.578 "block_size": 512, 00:09:41.578 "num_blocks": 65536, 00:09:41.578 "uuid": "83cda687-f8af-4c21-b2e8-8996b23e591f", 00:09:41.578 "assigned_rate_limits": { 00:09:41.578 "rw_ios_per_sec": 0, 00:09:41.578 "rw_mbytes_per_sec": 0, 00:09:41.578 "r_mbytes_per_sec": 0, 00:09:41.578 "w_mbytes_per_sec": 0 00:09:41.578 }, 00:09:41.578 "claimed": true, 00:09:41.578 "claim_type": "exclusive_write", 00:09:41.578 "zoned": false, 00:09:41.578 "supported_io_types": { 00:09:41.578 "read": true, 00:09:41.578 "write": true, 00:09:41.578 "unmap": true, 00:09:41.578 "write_zeroes": true, 00:09:41.578 "flush": true, 00:09:41.578 "reset": true, 00:09:41.578 "compare": false, 00:09:41.578 "compare_and_write": false, 00:09:41.578 "abort": true, 00:09:41.578 "nvme_admin": false, 00:09:41.578 "nvme_io": false 00:09:41.578 }, 00:09:41.578 "memory_domains": [ 00:09:41.578 { 00:09:41.578 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.578 "dma_device_type": 2 00:09:41.578 } 00:09:41.578 ], 00:09:41.578 "driver_specific": {} 00:09:41.578 } 00:09:41.578 ] 00:09:41.578 12:01:48 -- common/autotest_common.sh@895 -- # return 0 00:09:41.578 12:01:48 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:09:41.578 12:01:48 -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:09:41.578 12:01:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:41.578 12:01:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:41.578 12:01:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:41.578 12:01:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:41.578 12:01:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:41.578 12:01:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:41.578 12:01:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:41.578 12:01:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:41.578 12:01:48 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:41.578 12:01:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.836 12:01:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:41.836 "name": "Existed_Raid", 00:09:41.836 "uuid": "592c2626-c97f-428c-b972-718f57f46702", 00:09:41.836 "strip_size_kb": 64, 00:09:41.836 "state": "configuring", 00:09:41.836 "raid_level": "raid0", 00:09:41.836 "superblock": true, 00:09:41.836 "num_base_bdevs": 2, 00:09:41.836 "num_base_bdevs_discovered": 1, 00:09:41.836 "num_base_bdevs_operational": 2, 00:09:41.836 "base_bdevs_list": [ 00:09:41.836 { 00:09:41.836 "name": "BaseBdev1", 00:09:41.836 "uuid": "83cda687-f8af-4c21-b2e8-8996b23e591f", 00:09:41.836 "is_configured": true, 00:09:41.836 "data_offset": 2048, 00:09:41.836 "data_size": 63488 00:09:41.836 }, 00:09:41.836 { 00:09:41.836 "name": "BaseBdev2", 00:09:41.836 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.836 "is_configured": false, 00:09:41.836 "data_offset": 0, 00:09:41.836 "data_size": 0 00:09:41.836 } 00:09:41.836 ] 00:09:41.836 }' 00:09:41.836 12:01:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:41.836 12:01:48 -- common/autotest_common.sh@10 -- # set +x 00:09:42.443 
12:01:49 -- bdev/bdev_raid.sh@242 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:09:42.443 [2024-07-25 12:01:49.557320] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:42.443 [2024-07-25 12:01:49.557355] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2445fc0 name Existed_Raid, state configuring
00:09:42.443 12:01:49 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']'
00:09:42.443 12:01:49 -- bdev/bdev_raid.sh@246 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:09:42.701 12:01:49 -- bdev/bdev_raid.sh@247 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:09:42.701 BaseBdev1
00:09:42.701 12:01:49 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1
00:09:42.701 12:01:49 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1
00:09:42.701 12:01:49 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:09:42.701 12:01:49 -- common/autotest_common.sh@889 -- # local i
00:09:42.701 12:01:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:09:42.701 12:01:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:09:42.701 12:01:49 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:09:42.960 12:01:50 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:42.960 [
00:09:42.960 {
00:09:42.960 "name": "BaseBdev1",
00:09:42.960 "aliases": [
00:09:42.960 "641d60d3-0c0e-4e62-9b60-3a18ade65e5e"
00:09:42.960 ],
00:09:42.960 "product_name": "Malloc disk",
00:09:42.960 "block_size": 512,
00:09:42.960 "num_blocks": 65536,
00:09:42.960 "uuid": "641d60d3-0c0e-4e62-9b60-3a18ade65e5e",
00:09:42.960 "assigned_rate_limits": {
00:09:42.960 "rw_ios_per_sec": 0,
00:09:42.960 "rw_mbytes_per_sec": 0,
00:09:42.960 "r_mbytes_per_sec": 0,
00:09:42.960 "w_mbytes_per_sec": 0
00:09:42.960 },
00:09:42.960 "claimed": false,
00:09:42.960 "zoned": false,
00:09:42.960 "supported_io_types": {
00:09:42.960 "read": true,
00:09:42.960 "write": true,
00:09:42.960 "unmap": true,
00:09:42.960 "write_zeroes": true,
00:09:42.960 "flush": true,
00:09:42.960 "reset": true,
00:09:42.960 "compare": false,
00:09:42.960 "compare_and_write": false,
00:09:42.960 "abort": true,
00:09:42.960 "nvme_admin": false,
00:09:42.960 "nvme_io": false
00:09:42.960 },
00:09:42.960 "memory_domains": [
00:09:42.960 {
00:09:42.960 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:42.960 "dma_device_type": 2
00:09:42.960 }
00:09:42.960 ],
00:09:42.960 "driver_specific": {}
00:09:42.960 }
00:09:42.960 ]
00:09:43.218 12:01:50 -- common/autotest_common.sh@895 -- # return 0
00:09:43.218 12:01:50 -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:09:43.218 [2024-07-25 12:01:50.428265] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:43.218 [2024-07-25 12:01:50.429267] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:43.218 [2024-07-25 12:01:50.429298] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:43.218 12:01:50 -- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:09:43.218 12:01:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:09:43.218 12:01:50 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2
00:09:43.218 12:01:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:09:43.218 12:01:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:09:43.218 12:01:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:09:43.218 12:01:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:09:43.218 12:01:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:09:43.218 12:01:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:09:43.218 12:01:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:09:43.218 12:01:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:09:43.218 12:01:50 -- bdev/bdev_raid.sh@125 -- # local tmp
00:09:43.218 12:01:50 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:09:43.218 12:01:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:43.477 12:01:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:09:43.477 "name": "Existed_Raid",
00:09:43.477 "uuid": "85cfa1ee-8cf3-4eae-a4b7-470727ca02db",
00:09:43.477 "strip_size_kb": 64,
00:09:43.477 "state": "configuring",
00:09:43.477 "raid_level": "raid0",
00:09:43.477 "superblock": true,
00:09:43.477 "num_base_bdevs": 2,
00:09:43.477 "num_base_bdevs_discovered": 1,
00:09:43.477 "num_base_bdevs_operational": 2,
00:09:43.477 "base_bdevs_list": [
00:09:43.477 {
00:09:43.477 "name": "BaseBdev1",
00:09:43.477 "uuid": "641d60d3-0c0e-4e62-9b60-3a18ade65e5e",
00:09:43.477 "is_configured": true,
00:09:43.477 "data_offset": 2048,
00:09:43.477 "data_size": 63488
00:09:43.477 },
00:09:43.477 {
00:09:43.477 "name": "BaseBdev2",
00:09:43.477 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:43.477 "is_configured": false,
00:09:43.477 "data_offset": 0,
00:09:43.477 "data_size": 0
00:09:43.477 }
00:09:43.477 ]
00:09:43.477 }'
00:09:43.477 12:01:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:09:43.477 12:01:50 -- common/autotest_common.sh@10 -- # set +x
00:09:44.044 12:01:51 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2
00:09:44.044 [2024-07-25 12:01:51.277309] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:44.044 [2024-07-25 12:01:51.277421] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x25eb790
00:09:44.044 [2024-07-25 12:01:51.277431] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:09:44.044 [2024-07-25 12:01:51.277554] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2445180
00:09:44.044 [2024-07-25 12:01:51.277633] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x25eb790
00:09:44.044 [2024-07-25 12:01:51.277640] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x25eb790
00:09:44.044 [2024-07-25 12:01:51.277703] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:44.044 BaseBdev2
00:09:44.044 12:01:51 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2
00:09:44.044 12:01:51 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2
00:09:44.044 12:01:51 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:09:44.044 12:01:51 -- common/autotest_common.sh@889 -- # local i
00:09:44.044 12:01:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:09:44.044 12:01:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:09:44.044 12:01:51 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:09:44.302 12:01:51 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:44.561 [
00:09:44.561 {
00:09:44.561 "name": "BaseBdev2",
00:09:44.561 "aliases": [
00:09:44.561 "a714fa04-9f32-4a6c-8ea8-e70f75cf4c23"
00:09:44.561 ],
00:09:44.561 "product_name": "Malloc disk",
00:09:44.561 "block_size": 512,
00:09:44.561 "num_blocks": 65536,
00:09:44.561 "uuid": "a714fa04-9f32-4a6c-8ea8-e70f75cf4c23",
00:09:44.561 "assigned_rate_limits": {
00:09:44.561 "rw_ios_per_sec": 0,
00:09:44.561 "rw_mbytes_per_sec": 0,
00:09:44.561 "r_mbytes_per_sec": 0,
00:09:44.561 "w_mbytes_per_sec": 0
00:09:44.561 },
00:09:44.561 "claimed": true,
00:09:44.561 "claim_type": "exclusive_write",
00:09:44.561 "zoned": false,
00:09:44.561 "supported_io_types": {
00:09:44.561 "read": true,
00:09:44.561 "write": true,
00:09:44.561 "unmap": true,
00:09:44.561 "write_zeroes": true,
00:09:44.561 "flush": true,
00:09:44.561 "reset": true,
00:09:44.561 "compare": false,
00:09:44.561 "compare_and_write": false,
00:09:44.561 "abort": true,
00:09:44.561 "nvme_admin": false,
00:09:44.561 "nvme_io": false
00:09:44.561 },
00:09:44.561 "memory_domains": [
00:09:44.561 {
00:09:44.561 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:44.561 "dma_device_type": 2
00:09:44.561 }
00:09:44.561 ],
00:09:44.561 "driver_specific": {}
00:09:44.561 }
00:09:44.561 ]
00:09:44.561 12:01:51 -- common/autotest_common.sh@895 -- # return 0
00:09:44.561 12:01:51 -- bdev/bdev_raid.sh@254 -- # (( i++ ))
00:09:44.561 12:01:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:09:44.561 12:01:51 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2
00:09:44.561 12:01:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:09:44.561 12:01:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:09:44.561 12:01:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:09:44.561 12:01:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:09:44.561 12:01:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:09:44.561 12:01:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:09:44.561 12:01:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:09:44.561 12:01:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:09:44.561 12:01:51 -- bdev/bdev_raid.sh@125 -- # local tmp
00:09:44.561 12:01:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:44.561 12:01:51 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:09:44.561 12:01:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:09:44.561 "name": "Existed_Raid",
00:09:44.561 "uuid": "85cfa1ee-8cf3-4eae-a4b7-470727ca02db",
00:09:44.561 "strip_size_kb": 64,
00:09:44.561 "state": "online",
00:09:44.561 "raid_level": "raid0",
00:09:44.561 "superblock": true,
00:09:44.561 "num_base_bdevs": 2,
00:09:44.561 "num_base_bdevs_discovered": 2,
00:09:44.561 "num_base_bdevs_operational": 2,
00:09:44.561 "base_bdevs_list": [
00:09:44.561 {
00:09:44.561 "name": "BaseBdev1",
00:09:44.561 "uuid": "641d60d3-0c0e-4e62-9b60-3a18ade65e5e",
00:09:44.561 "is_configured": true,
00:09:44.561 "data_offset": 2048,
00:09:44.561 "data_size": 63488
00:09:44.561 },
00:09:44.561 {
00:09:44.561 "name": "BaseBdev2",
00:09:44.561 "uuid": "a714fa04-9f32-4a6c-8ea8-e70f75cf4c23",
00:09:44.561 "is_configured": true,
00:09:44.561 "data_offset": 2048,
00:09:44.561 "data_size": 63488
00:09:44.561 }
00:09:44.561 ]
00:09:44.561 }'
00:09:44.561 12:01:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:09:44.561 12:01:51 -- common/autotest_common.sh@10 -- # set +x
00:09:45.130 12:01:52 -- bdev/bdev_raid.sh@262 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1
00:09:45.130 [2024-07-25 12:01:52.428328] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:45.130 [2024-07-25 12:01:52.428352] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:45.130 [2024-07-25 12:01:52.428386] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:45.387 12:01:52 -- bdev/bdev_raid.sh@263 -- # local expected_state
00:09:45.388 12:01:52 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0
00:09:45.388 12:01:52 -- bdev/bdev_raid.sh@195 -- # case $1 in
00:09:45.388 12:01:52 -- bdev/bdev_raid.sh@197 -- # return 1
00:09:45.388 12:01:52 -- bdev/bdev_raid.sh@265 -- # expected_state=offline
00:09:45.388 12:01:52 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1
00:09:45.388 12:01:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:09:45.388 12:01:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline
00:09:45.388 12:01:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:09:45.388 12:01:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:09:45.388 12:01:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:09:45.388 12:01:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:09:45.388 12:01:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:09:45.388 12:01:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:09:45.388 12:01:52 -- bdev/bdev_raid.sh@125 -- # local tmp
00:09:45.388 12:01:52 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:09:45.388 12:01:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:45.388 12:01:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:09:45.388 "name": "Existed_Raid",
00:09:45.388 "uuid": "85cfa1ee-8cf3-4eae-a4b7-470727ca02db",
00:09:45.388 "strip_size_kb": 64,
00:09:45.388 "state": "offline",
00:09:45.388 "raid_level": "raid0",
00:09:45.388 "superblock": true,
00:09:45.388 "num_base_bdevs": 2,
00:09:45.388 "num_base_bdevs_discovered": 1,
00:09:45.388 "num_base_bdevs_operational": 1,
00:09:45.388 "base_bdevs_list": [
00:09:45.388 {
00:09:45.388 "name": null,
00:09:45.388 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:45.388 "is_configured": false,
00:09:45.388 "data_offset": 2048,
00:09:45.388 "data_size": 63488
00:09:45.388 },
00:09:45.388 {
00:09:45.388 "name": "BaseBdev2",
00:09:45.388 "uuid": "a714fa04-9f32-4a6c-8ea8-e70f75cf4c23",
00:09:45.388 "is_configured": true,
00:09:45.388 "data_offset": 2048,
00:09:45.388 "data_size": 63488
00:09:45.388 }
00:09:45.388 ]
00:09:45.388 }'
00:09:45.388 12:01:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:09:45.388 12:01:52 -- common/autotest_common.sh@10 -- # set +x
00:09:45.954 12:01:53 -- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:09:45.954 12:01:53 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:09:45.954 12:01:53 -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:09:45.954 12:01:53 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:09:46.226 12:01:53 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:09:46.226 12:01:53 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:46.226 12:01:53 -- bdev/bdev_raid.sh@279 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:09:46.226 [2024-07-25 12:01:53.439747] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:46.226 [2024-07-25 12:01:53.439779] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x25eb790 name Existed_Raid, state offline
00:09:46.226 12:01:53 -- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:09:46.226 12:01:53 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:09:46.226 12:01:53 -- bdev/bdev_raid.sh@281 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:09:46.226 12:01:53 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:09:46.487 12:01:53 -- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:09:46.487 12:01:53 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:09:46.487 12:01:53 -- bdev/bdev_raid.sh@287 -- # killprocess 1217397
00:09:46.487 12:01:53 -- common/autotest_common.sh@926 -- # '[' -z 1217397 ']'
00:09:46.487 12:01:53 -- common/autotest_common.sh@930 -- # kill -0 1217397
00:09:46.487 12:01:53 -- common/autotest_common.sh@931 -- # uname
00:09:46.487 12:01:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:09:46.487 12:01:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1217397
00:09:46.487 12:01:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:09:46.487 12:01:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:09:46.487 12:01:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1217397'
killing process with pid 1217397
00:09:46.487 12:01:53 -- common/autotest_common.sh@945 -- # kill 1217397
00:09:46.487 [2024-07-25 12:01:53.664411] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:46.487 12:01:53 -- common/autotest_common.sh@950 -- # wait 1217397
00:09:46.487 [2024-07-25 12:01:53.665290] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:46.745 12:01:53 -- bdev/bdev_raid.sh@289 -- # return 0
00:09:46.745
00:09:46.745 real 0m7.603s
00:09:46.745 user 0m13.170s
00:09:46.745 sys 0m1.518s
00:09:46.745 12:01:53 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:09:46.745 12:01:53 -- common/autotest_common.sh@10 -- # set +x
00:09:46.745 ************************************
00:09:46.745 END TEST raid_state_function_test_sb
00:09:46.745 ************************************
00:09:46.745 12:01:53 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 2
00:09:46.745 12:01:53 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']'
00:09:46.745 12:01:53 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:09:46.745 12:01:53 -- common/autotest_common.sh@10 -- # set +x
00:09:46.745 ************************************
00:09:46.745 START TEST raid_superblock_test
00:09:46.745 ************************************
00:09:46.745 12:01:53 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 2
00:09:46.745 12:01:53 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0
00:09:46.745 12:01:53 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2
00:09:46.745 12:01:53 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=()
00:09:46.745 12:01:53 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc
00:09:46.745 12:01:53 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=()
00:09:46.745 12:01:53 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt
00:09:46.745 12:01:53 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=()
00:09:46.745 12:01:53 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid
00:09:46.745 12:01:53 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1
00:09:46.745 12:01:53 -- bdev/bdev_raid.sh@344 -- # local strip_size
00:09:46.745 12:01:53 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg
00:09:46.745 12:01:53 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid
00:09:46.745 12:01:53 -- bdev/bdev_raid.sh@347 -- # local raid_bdev
00:09:46.745 12:01:53 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']'
00:09:46.745 12:01:53 -- bdev/bdev_raid.sh@350 -- # strip_size=64
00:09:46.745 12:01:53 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64'
00:09:46.745 12:01:53 -- bdev/bdev_raid.sh@357 -- # raid_pid=1218600
00:09:46.745 12:01:53 -- bdev/bdev_raid.sh@358 -- # waitforlisten 1218600 /var/tmp/spdk-raid.sock
00:09:46.745 12:01:53 -- bdev/bdev_raid.sh@356 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:09:46.745 12:01:53 -- common/autotest_common.sh@819 -- # '[' -z 1218600 ']'
00:09:46.745 12:01:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:09:46.745 12:01:53 -- common/autotest_common.sh@824 -- # local max_retries=100
00:09:46.745 12:01:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:09:46.745 12:01:53 -- common/autotest_common.sh@828 -- # xtrace_disable
00:09:46.745 12:01:53 -- common/autotest_common.sh@10 -- # set +x
00:09:47.003 [2024-07-25 12:01:53.989989] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:09:47.003 [2024-07-25 12:01:53.990041] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1218600 ]
00:09:47.003 [2024-07-25 12:01:54.077389] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:47.003 [2024-07-25 12:01:54.165012] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:09:47.003 [2024-07-25 12:01:54.219944] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:47.003 [2024-07-25 12:01:54.219974] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:47.568 12:01:54 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:09:47.568 12:01:54 -- common/autotest_common.sh@852 -- # return 0
00:09:47.568 12:01:54 -- bdev/bdev_raid.sh@361 -- # (( i = 1 ))
00:09:47.568 12:01:54 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:09:47.568 12:01:54 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1
00:09:47.568 12:01:54 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1
00:09:47.568 12:01:54 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:09:47.568 12:01:54 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:47.568 12:01:54 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:09:47.568 12:01:54 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:47.568 12:01:54 -- bdev/bdev_raid.sh@370 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:09:47.827 malloc1
00:09:47.828 12:01:54 -- bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:09:47.828 [2024-07-25 12:01:55.101662] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:09:47.828 [2024-07-25 12:01:55.101705] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:47.828 [2024-07-25 12:01:55.101721] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x184b8d0
00:09:47.828 [2024-07-25 12:01:55.101731] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:47.828 [2024-07-25 12:01:55.102966] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:47.828 [2024-07-25 12:01:55.102990] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:09:47.828 pt1
00:09:47.828 12:01:55 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:09:47.828 12:01:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:09:47.828 12:01:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:09:47.828 12:01:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:09:47.828 12:01:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:09:47.828 12:01:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:09:47.828 12:01:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:09:47.828 12:01:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:09:47.828 12:01:55 -- bdev/bdev_raid.sh@370 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:09:48.086 malloc2
12:01:55 -- bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:48.345 [2024-07-25 12:01:55.434475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:48.345 [2024-07-25 12:01:55.434509] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:48.345 [2024-07-25 12:01:55.434524] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19f31a0
00:09:48.345 [2024-07-25 12:01:55.434531] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:48.345 [2024-07-25 12:01:55.435594] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:48.345 [2024-07-25 12:01:55.435614] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:48.345 pt2
00:09:48.345 12:01:55 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:09:48.345 12:01:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:09:48.345 12:01:55 -- bdev/bdev_raid.sh@375 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s
00:09:48.345 [2024-07-25 12:01:55.598926] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:09:48.345 [2024-07-25 12:01:55.599889] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:48.345 [2024-07-25 12:01:55.599999] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x19f3700
00:09:48.345 [2024-07-25 12:01:55.600008] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:09:48.345 [2024-07-25 12:01:55.600143] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x19f2bf0
00:09:48.345 [2024-07-25 12:01:55.600232] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x19f3700
00:09:48.345 [2024-07-25 12:01:55.600238] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x19f3700
00:09:48.345 [2024-07-25 12:01:55.600308] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:48.345 12:01:55 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2
00:09:48.345 12:01:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:09:48.345 12:01:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:09:48.345 12:01:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:09:48.345 12:01:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:09:48.345 12:01:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:09:48.345 12:01:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:09:48.345 12:01:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:09:48.345 12:01:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:09:48.345 12:01:55 -- bdev/bdev_raid.sh@125 -- # local tmp
00:09:48.345 12:01:55 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:09:48.345 12:01:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:48.603 12:01:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:09:48.603 "name": "raid_bdev1",
00:09:48.603 "uuid": "1b3da875-00fc-4377-b7ac-ba2741773c4d",
00:09:48.603 "strip_size_kb": 64,
00:09:48.604 "state": "online",
00:09:48.604 "raid_level": "raid0",
00:09:48.604 "superblock": true,
00:09:48.604 "num_base_bdevs": 2,
00:09:48.604 "num_base_bdevs_discovered": 2,
00:09:48.604 "num_base_bdevs_operational": 2,
00:09:48.604 "base_bdevs_list": [
00:09:48.604 {
00:09:48.604 "name": "pt1",
00:09:48.604 "uuid": "250116d2-26a3-5771-95f1-84ab26b4320e",
00:09:48.604 "is_configured": true,
00:09:48.604 "data_offset": 2048,
00:09:48.604 "data_size": 63488
00:09:48.604 },
00:09:48.604 {
00:09:48.604 "name": "pt2",
00:09:48.604 "uuid": "3bb0473b-3aeb-5dbf-a854-95d1f6b05060",
00:09:48.604 "is_configured": true,
00:09:48.604 "data_offset": 2048,
00:09:48.604 "data_size": 63488
00:09:48.604 }
00:09:48.604 ]
00:09:48.604 }'
00:09:48.604 12:01:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:09:48.604 12:01:55 -- common/autotest_common.sh@10 -- # set +x
00:09:49.170 12:01:56 -- bdev/bdev_raid.sh@379 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:09:49.170 12:01:56 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid'
00:09:49.170 [2024-07-25 12:01:56.409120] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:49.170 12:01:56 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=1b3da875-00fc-4377-b7ac-ba2741773c4d
00:09:49.170 12:01:56 -- bdev/bdev_raid.sh@380 -- # '[' -z 1b3da875-00fc-4377-b7ac-ba2741773c4d ']'
00:09:49.170 12:01:56 -- bdev/bdev_raid.sh@385 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:09:49.428 [2024-07-25 12:01:56.585447] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:49.428 [2024-07-25 12:01:56.585463] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:49.428 [2024-07-25 12:01:56.585498] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:49.428 [2024-07-25 12:01:56.585526] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:09:49.428 [2024-07-25 12:01:56.585533] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19f3700 name raid_bdev1, state offline
00:09:49.428 12:01:56 -- bdev/bdev_raid.sh@386 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:09:49.428 12:01:56 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]'
00:09:49.687 12:01:56 -- bdev/bdev_raid.sh@386 -- # raid_bdev=
00:09:49.687 12:01:56 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']'
00:09:49.687 12:01:56 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:09:49.687 12:01:56 -- bdev/bdev_raid.sh@393 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:09:49.687 12:01:56 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:09:49.687 12:01:56 -- bdev/bdev_raid.sh@393 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:09:49.945 12:01:57 -- bdev/bdev_raid.sh@395 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:09:49.945 12:01:57 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:09:50.204 12:01:57 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
00:09:50.204 12:01:57 -- bdev/bdev_raid.sh@401 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1
00:09:50.204 12:01:57 -- common/autotest_common.sh@640 -- # local es=0
00:09:50.204 12:01:57 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1
00:09:50.204 12:01:57 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:09:50.204 12:01:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:09:50.204 12:01:57 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:09:50.205 12:01:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:09:50.205 12:01:57 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:09:50.205 12:01:57 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:09:50.205 12:01:57 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:09:50.205 12:01:57 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]]
00:09:50.205 12:01:57 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1
00:09:50.205 [2024-07-25 12:01:57.427606] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:09:50.205 [2024-07-25 12:01:57.428682] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:09:50.205 [2024-07-25 12:01:57.428721] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:09:50.205 [2024-07-25 12:01:57.428749] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:09:50.205 [2024-07-25 12:01:57.428760] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:09:50.205 [2024-07-25 12:01:57.428767] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19f29a0 name raid_bdev1, state configuring
00:09:50.205 request:
00:09:50.205 {
00:09:50.205 "name": "raid_bdev1",
00:09:50.205 "raid_level": "raid0",
00:09:50.205 "base_bdevs": [
00:09:50.205 "malloc1",
00:09:50.205 "malloc2"
00:09:50.205 ],
00:09:50.205 "superblock": false,
00:09:50.205 "strip_size_kb": 64,
00:09:50.205 "method": "bdev_raid_create",
00:09:50.205 "req_id": 1
00:09:50.205 }
00:09:50.205 Got JSON-RPC error response
00:09:50.205 response:
00:09:50.205 {
00:09:50.205 "code": -17,
00:09:50.205 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:09:50.205 }
00:09:50.205 12:01:57 -- common/autotest_common.sh@643 -- # es=1
00:09:50.205 12:01:57 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:09:50.205 12:01:57 -- common/autotest_common.sh@662 -- # [[ -n '' ]]
00:09:50.205 12:01:57 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:09:50.205 12:01:57 -- bdev/bdev_raid.sh@403 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:09:50.205 12:01:57 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:09:50.464 12:01:57 -- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:09:50.464 12:01:57 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
00:09:50.464 12:01:57 -- bdev/bdev_raid.sh@409 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:09:50.464 [2024-07-25 12:01:57.772471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:09:50.464 [2024-07-25 12:01:57.772501] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:50.464 [2024-07-25 12:01:57.772518] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x184bb00
00:09:50.464 [2024-07-25 12:01:57.772526] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:50.722 [2024-07-25 12:01:57.773763] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:50.722 [2024-07-25 12:01:57.773786] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:09:50.722 [2024-07-25 12:01:57.773831] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:09:50.722 [2024-07-25 12:01:57.773849] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:09:50.722 pt1
00:09:50.722 12:01:57 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2
00:09:50.722 12:01:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:09:50.722 12:01:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:09:50.722 12:01:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:09:50.722 12:01:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:09:50.722 12:01:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:09:50.722 12:01:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:09:50.722 12:01:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:09:50.722 12:01:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:09:50.722 12:01:57 -- bdev/bdev_raid.sh@125 -- # local tmp
00:09:50.722 12:01:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:09:50.722 12:01:57 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:09:50.722 12:01:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:09:50.722 "name": "raid_bdev1",
00:09:50.722 "uuid": "1b3da875-00fc-4377-b7ac-ba2741773c4d",
00:09:50.722 "strip_size_kb": 64,
00:09:50.722 "state": "configuring",
00:09:50.722 "raid_level": "raid0",
00:09:50.722 "superblock": true,
00:09:50.722 "num_base_bdevs": 2,
00:09:50.722 "num_base_bdevs_discovered": 1,
00:09:50.722 "num_base_bdevs_operational": 2,
00:09:50.722 "base_bdevs_list": [
00:09:50.722 {
00:09:50.722 "name": "pt1",
00:09:50.722 "uuid": "250116d2-26a3-5771-95f1-84ab26b4320e",
00:09:50.722 "is_configured": true,
00:09:50.722 "data_offset": 2048,
00:09:50.722 "data_size": 63488
00:09:50.722 },
00:09:50.722 {
00:09:50.722 "name": null,
00:09:50.722 "uuid": "3bb0473b-3aeb-5dbf-a854-95d1f6b05060",
00:09:50.722 "is_configured": false,
00:09:50.722 "data_offset": 2048,
00:09:50.722 "data_size": 63488
00:09:50.722 }
00:09:50.722 ]
00:09:50.722 }'
00:09:50.722 12:01:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:09:50.722 12:01:57 -- common/autotest_common.sh@10 -- # set +x
00:09:51.290 12:01:58 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']'
00:09:51.290 12:01:58 -- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:09:51.290 12:01:58 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:09:51.290 12:01:58 -- bdev/bdev_raid.sh@423 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:09:51.550 [2024-07-25 12:01:58.606616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:09:51.550 [2024-07-25 12:01:58.606654] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:09:51.550 [2024-07-25 12:01:58.606669] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19f47c0
00:09:51.550 [2024-07-25 12:01:58.606678] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:09:51.550 [2024-07-25 12:01:58.606920] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:09:51.550 [2024-07-25 12:01:58.606931] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:09:51.550 [2024-07-25 12:01:58.606975] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:09:51.550 [2024-07-25 12:01:58.606988] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:09:51.550 [2024-07-25 12:01:58.607051] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x19f8ac0
00:09:51.550 [2024-07-25 12:01:58.607058] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:09:51.550 [2024-07-25 12:01:58.607172] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18627c0
00:09:51.550 [2024-07-25 12:01:58.607252] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x19f8ac0
00:09:51.550 [2024-07-25
12:01:58.607259] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x19f8ac0 00:09:51.550 [2024-07-25 12:01:58.607328] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:51.550 pt2 00:09:51.550 12:01:58 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:09:51.550 12:01:58 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:09:51.550 12:01:58 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:09:51.550 12:01:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:09:51.550 12:01:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:51.550 12:01:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:09:51.550 12:01:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:51.550 12:01:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:51.550 12:01:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:51.550 12:01:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:51.550 12:01:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:51.550 12:01:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:51.550 12:01:58 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:51.550 12:01:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:51.550 12:01:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:51.550 "name": "raid_bdev1", 00:09:51.550 "uuid": "1b3da875-00fc-4377-b7ac-ba2741773c4d", 00:09:51.550 "strip_size_kb": 64, 00:09:51.550 "state": "online", 00:09:51.550 "raid_level": "raid0", 00:09:51.550 "superblock": true, 00:09:51.550 "num_base_bdevs": 2, 00:09:51.550 "num_base_bdevs_discovered": 2, 00:09:51.550 "num_base_bdevs_operational": 2, 00:09:51.550 "base_bdevs_list": [ 00:09:51.550 { 00:09:51.550 "name": "pt1", 00:09:51.550 "uuid": 
"250116d2-26a3-5771-95f1-84ab26b4320e", 00:09:51.550 "is_configured": true, 00:09:51.550 "data_offset": 2048, 00:09:51.550 "data_size": 63488 00:09:51.550 }, 00:09:51.550 { 00:09:51.550 "name": "pt2", 00:09:51.550 "uuid": "3bb0473b-3aeb-5dbf-a854-95d1f6b05060", 00:09:51.550 "is_configured": true, 00:09:51.550 "data_offset": 2048, 00:09:51.550 "data_size": 63488 00:09:51.550 } 00:09:51.550 ] 00:09:51.550 }' 00:09:51.550 12:01:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:51.550 12:01:58 -- common/autotest_common.sh@10 -- # set +x 00:09:52.117 12:01:59 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:09:52.117 12:01:59 -- bdev/bdev_raid.sh@430 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:09:52.375 [2024-07-25 12:01:59.444913] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:52.375 12:01:59 -- bdev/bdev_raid.sh@430 -- # '[' 1b3da875-00fc-4377-b7ac-ba2741773c4d '!=' 1b3da875-00fc-4377-b7ac-ba2741773c4d ']' 00:09:52.375 12:01:59 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:09:52.375 12:01:59 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:09:52.375 12:01:59 -- bdev/bdev_raid.sh@197 -- # return 1 00:09:52.375 12:01:59 -- bdev/bdev_raid.sh@511 -- # killprocess 1218600 00:09:52.375 12:01:59 -- common/autotest_common.sh@926 -- # '[' -z 1218600 ']' 00:09:52.375 12:01:59 -- common/autotest_common.sh@930 -- # kill -0 1218600 00:09:52.375 12:01:59 -- common/autotest_common.sh@931 -- # uname 00:09:52.375 12:01:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:52.375 12:01:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1218600 00:09:52.375 12:01:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:52.375 12:01:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:52.375 12:01:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1218600' 00:09:52.375 killing 
process with pid 1218600 00:09:52.375 12:01:59 -- common/autotest_common.sh@945 -- # kill 1218600 00:09:52.375 [2024-07-25 12:01:59.517560] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:52.375 [2024-07-25 12:01:59.517606] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:52.375 [2024-07-25 12:01:59.517635] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:52.375 [2024-07-25 12:01:59.517643] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19f8ac0 name raid_bdev1, state offline 00:09:52.375 12:01:59 -- common/autotest_common.sh@950 -- # wait 1218600 00:09:52.375 [2024-07-25 12:01:59.533182] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:52.632 12:01:59 -- bdev/bdev_raid.sh@513 -- # return 0 00:09:52.632 00:09:52.632 real 0m5.803s 00:09:52.632 user 0m9.904s 00:09:52.632 sys 0m1.174s 00:09:52.632 12:01:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:52.632 12:01:59 -- common/autotest_common.sh@10 -- # set +x 00:09:52.632 ************************************ 00:09:52.632 END TEST raid_superblock_test 00:09:52.632 ************************************ 00:09:52.632 12:01:59 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:09:52.632 12:01:59 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:09:52.632 12:01:59 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:09:52.632 12:01:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:52.632 12:01:59 -- common/autotest_common.sh@10 -- # set +x 00:09:52.632 ************************************ 00:09:52.632 START TEST raid_state_function_test 00:09:52.632 ************************************ 00:09:52.632 12:01:59 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 false 00:09:52.632 12:01:59 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 
00:09:52.632 12:01:59 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:09:52.632 12:01:59 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:09:52.632 12:01:59 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:09:52.632 12:01:59 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:09:52.632 12:01:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:52.632 12:01:59 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:09:52.632 12:01:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:52.632 12:01:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:52.632 12:01:59 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:09:52.632 12:01:59 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:52.632 12:01:59 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:52.632 12:01:59 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:52.632 12:01:59 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:09:52.632 12:01:59 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:09:52.632 12:01:59 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:09:52.632 12:01:59 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:09:52.632 12:01:59 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:09:52.632 12:01:59 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:09:52.632 12:01:59 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:09:52.632 12:01:59 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:09:52.632 12:01:59 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:09:52.632 12:01:59 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:09:52.632 12:01:59 -- bdev/bdev_raid.sh@225 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:52.632 12:01:59 -- bdev/bdev_raid.sh@226 -- # raid_pid=1219534 00:09:52.632 12:01:59 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 1219534' 00:09:52.632 Process raid pid: 1219534 00:09:52.632 
12:01:59 -- bdev/bdev_raid.sh@228 -- # waitforlisten 1219534 /var/tmp/spdk-raid.sock 00:09:52.632 12:01:59 -- common/autotest_common.sh@819 -- # '[' -z 1219534 ']' 00:09:52.632 12:01:59 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:52.632 12:01:59 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:52.632 12:01:59 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:52.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:52.632 12:01:59 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:52.632 12:01:59 -- common/autotest_common.sh@10 -- # set +x 00:09:52.632 [2024-07-25 12:01:59.831836] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:09:52.632 [2024-07-25 12:01:59.831885] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.632 [2024-07-25 12:01:59.920771] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.890 [2024-07-25 12:02:00.016971] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.890 [2024-07-25 12:02:00.073207] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:52.890 [2024-07-25 12:02:00.073232] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.455 12:02:00 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:53.455 12:02:00 -- common/autotest_common.sh@852 -- # return 0 00:09:53.455 12:02:00 -- bdev/bdev_raid.sh@232 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:53.712 [2024-07-25 12:02:00.781856] bdev.c:8019:bdev_open_ext: 
*NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:53.712 [2024-07-25 12:02:00.781887] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:53.712 [2024-07-25 12:02:00.781893] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:53.712 [2024-07-25 12:02:00.781901] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:53.712 12:02:00 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:53.712 12:02:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:53.712 12:02:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:53.712 12:02:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:53.712 12:02:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:53.712 12:02:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:53.712 12:02:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:53.712 12:02:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:53.712 12:02:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:53.712 12:02:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:53.712 12:02:00 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:53.712 12:02:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.712 12:02:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:53.712 "name": "Existed_Raid", 00:09:53.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.712 "strip_size_kb": 64, 00:09:53.712 "state": "configuring", 00:09:53.712 "raid_level": "concat", 00:09:53.712 "superblock": false, 00:09:53.712 "num_base_bdevs": 2, 00:09:53.712 "num_base_bdevs_discovered": 0, 00:09:53.712 "num_base_bdevs_operational": 2, 00:09:53.712 "base_bdevs_list": [ 
00:09:53.712 { 00:09:53.712 "name": "BaseBdev1", 00:09:53.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.712 "is_configured": false, 00:09:53.712 "data_offset": 0, 00:09:53.712 "data_size": 0 00:09:53.712 }, 00:09:53.712 { 00:09:53.712 "name": "BaseBdev2", 00:09:53.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:53.712 "is_configured": false, 00:09:53.713 "data_offset": 0, 00:09:53.713 "data_size": 0 00:09:53.713 } 00:09:53.713 ] 00:09:53.713 }' 00:09:53.713 12:02:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:53.713 12:02:00 -- common/autotest_common.sh@10 -- # set +x 00:09:54.276 12:02:01 -- bdev/bdev_raid.sh@234 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:54.532 [2024-07-25 12:02:01.619933] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:54.532 [2024-07-25 12:02:01.619954] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14d8d40 name Existed_Raid, state configuring 00:09:54.532 12:02:01 -- bdev/bdev_raid.sh@238 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:54.532 [2024-07-25 12:02:01.788378] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:54.532 [2024-07-25 12:02:01.788400] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:54.532 [2024-07-25 12:02:01.788406] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:54.532 [2024-07-25 12:02:01.788424] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:54.532 12:02:01 -- bdev/bdev_raid.sh@239 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:54.790 [2024-07-25 
12:02:01.977627] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:54.790 BaseBdev1 00:09:54.790 12:02:01 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:09:54.790 12:02:01 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:09:54.790 12:02:01 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:54.790 12:02:01 -- common/autotest_common.sh@889 -- # local i 00:09:54.790 12:02:01 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:54.790 12:02:01 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:54.790 12:02:01 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:55.049 12:02:02 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:55.049 [ 00:09:55.049 { 00:09:55.049 "name": "BaseBdev1", 00:09:55.049 "aliases": [ 00:09:55.049 "88a15e41-19a8-4f48-8714-fd7b739a44cb" 00:09:55.049 ], 00:09:55.049 "product_name": "Malloc disk", 00:09:55.049 "block_size": 512, 00:09:55.049 "num_blocks": 65536, 00:09:55.049 "uuid": "88a15e41-19a8-4f48-8714-fd7b739a44cb", 00:09:55.049 "assigned_rate_limits": { 00:09:55.049 "rw_ios_per_sec": 0, 00:09:55.049 "rw_mbytes_per_sec": 0, 00:09:55.049 "r_mbytes_per_sec": 0, 00:09:55.049 "w_mbytes_per_sec": 0 00:09:55.049 }, 00:09:55.049 "claimed": true, 00:09:55.049 "claim_type": "exclusive_write", 00:09:55.049 "zoned": false, 00:09:55.049 "supported_io_types": { 00:09:55.049 "read": true, 00:09:55.049 "write": true, 00:09:55.049 "unmap": true, 00:09:55.049 "write_zeroes": true, 00:09:55.049 "flush": true, 00:09:55.049 "reset": true, 00:09:55.049 "compare": false, 00:09:55.049 "compare_and_write": false, 00:09:55.049 "abort": true, 00:09:55.049 "nvme_admin": false, 00:09:55.049 "nvme_io": false 00:09:55.049 }, 00:09:55.049 "memory_domains": [ 
00:09:55.049 { 00:09:55.049 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.049 "dma_device_type": 2 00:09:55.049 } 00:09:55.049 ], 00:09:55.049 "driver_specific": {} 00:09:55.049 } 00:09:55.049 ] 00:09:55.049 12:02:02 -- common/autotest_common.sh@895 -- # return 0 00:09:55.049 12:02:02 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:55.049 12:02:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:55.049 12:02:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:55.049 12:02:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:55.049 12:02:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:55.049 12:02:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:55.049 12:02:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:55.049 12:02:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:55.049 12:02:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:55.049 12:02:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:55.049 12:02:02 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:55.049 12:02:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.307 12:02:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:55.307 "name": "Existed_Raid", 00:09:55.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.307 "strip_size_kb": 64, 00:09:55.307 "state": "configuring", 00:09:55.307 "raid_level": "concat", 00:09:55.307 "superblock": false, 00:09:55.307 "num_base_bdevs": 2, 00:09:55.307 "num_base_bdevs_discovered": 1, 00:09:55.307 "num_base_bdevs_operational": 2, 00:09:55.307 "base_bdevs_list": [ 00:09:55.307 { 00:09:55.307 "name": "BaseBdev1", 00:09:55.307 "uuid": "88a15e41-19a8-4f48-8714-fd7b739a44cb", 00:09:55.307 "is_configured": true, 00:09:55.307 "data_offset": 0, 
00:09:55.307 "data_size": 65536 00:09:55.307 }, 00:09:55.307 { 00:09:55.307 "name": "BaseBdev2", 00:09:55.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.307 "is_configured": false, 00:09:55.307 "data_offset": 0, 00:09:55.307 "data_size": 0 00:09:55.307 } 00:09:55.307 ] 00:09:55.307 }' 00:09:55.307 12:02:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:55.307 12:02:02 -- common/autotest_common.sh@10 -- # set +x 00:09:55.871 12:02:02 -- bdev/bdev_raid.sh@242 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:55.871 [2024-07-25 12:02:03.100556] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:55.871 [2024-07-25 12:02:03.100584] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14d8fc0 name Existed_Raid, state configuring 00:09:55.872 12:02:03 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:09:55.872 12:02:03 -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:09:56.129 [2024-07-25 12:02:03.280988] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:56.129 [2024-07-25 12:02:03.282073] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:56.129 [2024-07-25 12:02:03.282098] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:56.129 12:02:03 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:09:56.129 12:02:03 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:56.129 12:02:03 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:56.129 12:02:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:56.129 12:02:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:09:56.129 12:02:03 -- 
bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:56.129 12:02:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:56.129 12:02:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:56.129 12:02:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:56.129 12:02:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:56.129 12:02:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:56.129 12:02:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:56.129 12:02:03 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:56.129 12:02:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.394 12:02:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:56.394 "name": "Existed_Raid", 00:09:56.394 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.394 "strip_size_kb": 64, 00:09:56.394 "state": "configuring", 00:09:56.394 "raid_level": "concat", 00:09:56.394 "superblock": false, 00:09:56.394 "num_base_bdevs": 2, 00:09:56.395 "num_base_bdevs_discovered": 1, 00:09:56.395 "num_base_bdevs_operational": 2, 00:09:56.395 "base_bdevs_list": [ 00:09:56.395 { 00:09:56.395 "name": "BaseBdev1", 00:09:56.395 "uuid": "88a15e41-19a8-4f48-8714-fd7b739a44cb", 00:09:56.395 "is_configured": true, 00:09:56.395 "data_offset": 0, 00:09:56.395 "data_size": 65536 00:09:56.395 }, 00:09:56.395 { 00:09:56.395 "name": "BaseBdev2", 00:09:56.395 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.395 "is_configured": false, 00:09:56.395 "data_offset": 0, 00:09:56.395 "data_size": 0 00:09:56.395 } 00:09:56.395 ] 00:09:56.395 }' 00:09:56.395 12:02:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:56.395 12:02:03 -- common/autotest_common.sh@10 -- # set +x 00:09:56.667 12:02:03 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b BaseBdev2 00:09:56.927 [2024-07-25 12:02:04.113919] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:56.927 [2024-07-25 12:02:04.113947] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x14d8630 00:09:56.927 [2024-07-25 12:02:04.113953] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:56.927 [2024-07-25 12:02:04.114090] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x14daef0 00:09:56.927 [2024-07-25 12:02:04.114170] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x14d8630 00:09:56.927 [2024-07-25 12:02:04.114177] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x14d8630 00:09:56.927 [2024-07-25 12:02:04.114304] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.927 BaseBdev2 00:09:56.927 12:02:04 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:09:56.927 12:02:04 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:09:56.927 12:02:04 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:56.927 12:02:04 -- common/autotest_common.sh@889 -- # local i 00:09:56.927 12:02:04 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:56.927 12:02:04 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:56.927 12:02:04 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:57.186 12:02:04 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:57.186 [ 00:09:57.186 { 00:09:57.186 "name": "BaseBdev2", 00:09:57.186 "aliases": [ 00:09:57.186 "675a4c73-d78a-4047-8acf-792ec9b76b09" 00:09:57.186 ], 00:09:57.186 "product_name": "Malloc disk", 00:09:57.186 "block_size": 512, 
00:09:57.186 "num_blocks": 65536, 00:09:57.186 "uuid": "675a4c73-d78a-4047-8acf-792ec9b76b09", 00:09:57.186 "assigned_rate_limits": { 00:09:57.186 "rw_ios_per_sec": 0, 00:09:57.186 "rw_mbytes_per_sec": 0, 00:09:57.186 "r_mbytes_per_sec": 0, 00:09:57.186 "w_mbytes_per_sec": 0 00:09:57.186 }, 00:09:57.186 "claimed": true, 00:09:57.186 "claim_type": "exclusive_write", 00:09:57.186 "zoned": false, 00:09:57.186 "supported_io_types": { 00:09:57.186 "read": true, 00:09:57.186 "write": true, 00:09:57.186 "unmap": true, 00:09:57.186 "write_zeroes": true, 00:09:57.186 "flush": true, 00:09:57.186 "reset": true, 00:09:57.186 "compare": false, 00:09:57.186 "compare_and_write": false, 00:09:57.186 "abort": true, 00:09:57.186 "nvme_admin": false, 00:09:57.186 "nvme_io": false 00:09:57.186 }, 00:09:57.186 "memory_domains": [ 00:09:57.186 { 00:09:57.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.186 "dma_device_type": 2 00:09:57.186 } 00:09:57.186 ], 00:09:57.186 "driver_specific": {} 00:09:57.186 } 00:09:57.186 ] 00:09:57.186 12:02:04 -- common/autotest_common.sh@895 -- # return 0 00:09:57.186 12:02:04 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:09:57.186 12:02:04 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:09:57.186 12:02:04 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:57.186 12:02:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:57.186 12:02:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:09:57.186 12:02:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:57.186 12:02:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:57.186 12:02:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:09:57.186 12:02:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:57.186 12:02:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:57.186 12:02:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:57.186 12:02:04 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:09:57.186 12:02:04 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:57.186 12:02:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.444 12:02:04 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:57.444 "name": "Existed_Raid", 00:09:57.444 "uuid": "bc657855-35e4-4187-8ac1-fd1f22d35f27", 00:09:57.444 "strip_size_kb": 64, 00:09:57.444 "state": "online", 00:09:57.444 "raid_level": "concat", 00:09:57.444 "superblock": false, 00:09:57.444 "num_base_bdevs": 2, 00:09:57.444 "num_base_bdevs_discovered": 2, 00:09:57.444 "num_base_bdevs_operational": 2, 00:09:57.444 "base_bdevs_list": [ 00:09:57.444 { 00:09:57.444 "name": "BaseBdev1", 00:09:57.444 "uuid": "88a15e41-19a8-4f48-8714-fd7b739a44cb", 00:09:57.444 "is_configured": true, 00:09:57.444 "data_offset": 0, 00:09:57.444 "data_size": 65536 00:09:57.444 }, 00:09:57.444 { 00:09:57.444 "name": "BaseBdev2", 00:09:57.444 "uuid": "675a4c73-d78a-4047-8acf-792ec9b76b09", 00:09:57.444 "is_configured": true, 00:09:57.444 "data_offset": 0, 00:09:57.444 "data_size": 65536 00:09:57.444 } 00:09:57.444 ] 00:09:57.444 }' 00:09:57.444 12:02:04 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:57.444 12:02:04 -- common/autotest_common.sh@10 -- # set +x 00:09:58.012 12:02:05 -- bdev/bdev_raid.sh@262 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:58.012 [2024-07-25 12:02:05.301018] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:58.012 [2024-07-25 12:02:05.301038] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:58.012 [2024-07-25 12:02:05.301065] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:58.271 12:02:05 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:09:58.271 12:02:05 
-- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:09:58.271 12:02:05 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:09:58.271 12:02:05 -- bdev/bdev_raid.sh@197 -- # return 1 00:09:58.271 12:02:05 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:09:58.271 12:02:05 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:58.271 12:02:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:09:58.271 12:02:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:09:58.271 12:02:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:09:58.271 12:02:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:09:58.271 12:02:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:09:58.271 12:02:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:09:58.271 12:02:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:09:58.271 12:02:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:09:58.271 12:02:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:09:58.271 12:02:05 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:58.271 12:02:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.271 12:02:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:09:58.271 "name": "Existed_Raid", 00:09:58.271 "uuid": "bc657855-35e4-4187-8ac1-fd1f22d35f27", 00:09:58.271 "strip_size_kb": 64, 00:09:58.271 "state": "offline", 00:09:58.271 "raid_level": "concat", 00:09:58.271 "superblock": false, 00:09:58.271 "num_base_bdevs": 2, 00:09:58.271 "num_base_bdevs_discovered": 1, 00:09:58.271 "num_base_bdevs_operational": 1, 00:09:58.271 "base_bdevs_list": [ 00:09:58.271 { 00:09:58.271 "name": null, 00:09:58.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.271 "is_configured": false, 00:09:58.271 "data_offset": 0, 00:09:58.271 "data_size": 65536 00:09:58.271 
}, 00:09:58.271 { 00:09:58.271 "name": "BaseBdev2", 00:09:58.271 "uuid": "675a4c73-d78a-4047-8acf-792ec9b76b09", 00:09:58.271 "is_configured": true, 00:09:58.271 "data_offset": 0, 00:09:58.271 "data_size": 65536 00:09:58.271 } 00:09:58.271 ] 00:09:58.271 }' 00:09:58.271 12:02:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:09:58.271 12:02:05 -- common/autotest_common.sh@10 -- # set +x 00:09:58.840 12:02:05 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:09:58.840 12:02:05 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:58.840 12:02:05 -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:58.840 12:02:05 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:09:59.099 12:02:06 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:09:59.099 12:02:06 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:59.099 12:02:06 -- bdev/bdev_raid.sh@279 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:59.099 [2024-07-25 12:02:06.312417] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:59.099 [2024-07-25 12:02:06.312458] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14d8630 name Existed_Raid, state offline 00:09:59.099 12:02:06 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:09:59.099 12:02:06 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:09:59.099 12:02:06 -- bdev/bdev_raid.sh@281 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:59.099 12:02:06 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:09:59.358 12:02:06 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:09:59.358 12:02:06 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:09:59.358 12:02:06 -- bdev/bdev_raid.sh@287 -- # killprocess 1219534 00:09:59.358 12:02:06 -- 
common/autotest_common.sh@926 -- # '[' -z 1219534 ']' 00:09:59.358 12:02:06 -- common/autotest_common.sh@930 -- # kill -0 1219534 00:09:59.358 12:02:06 -- common/autotest_common.sh@931 -- # uname 00:09:59.358 12:02:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:59.358 12:02:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1219534 00:09:59.358 12:02:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:59.358 12:02:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:59.358 12:02:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1219534' 00:09:59.358 killing process with pid 1219534 00:09:59.358 12:02:06 -- common/autotest_common.sh@945 -- # kill 1219534 00:09:59.358 [2024-07-25 12:02:06.558809] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:59.358 12:02:06 -- common/autotest_common.sh@950 -- # wait 1219534 00:09:59.358 [2024-07-25 12:02:06.559709] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:59.617 12:02:06 -- bdev/bdev_raid.sh@289 -- # return 0 00:09:59.617 00:09:59.617 real 0m6.995s 00:09:59.617 user 0m12.081s 00:09:59.617 sys 0m1.420s 00:09:59.617 12:02:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:59.617 12:02:06 -- common/autotest_common.sh@10 -- # set +x 00:09:59.617 ************************************ 00:09:59.617 END TEST raid_state_function_test 00:09:59.617 ************************************ 00:09:59.618 12:02:06 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:09:59.618 12:02:06 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:09:59.618 12:02:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:59.618 12:02:06 -- common/autotest_common.sh@10 -- # set +x 00:09:59.618 ************************************ 00:09:59.618 START TEST raid_state_function_test_sb 00:09:59.618 ************************************ 00:09:59.618 
12:02:06 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 2 true 00:09:59.618 12:02:06 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:09:59.618 12:02:06 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:09:59.618 12:02:06 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:09:59.618 12:02:06 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:09:59.618 12:02:06 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:09:59.618 12:02:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:59.618 12:02:06 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:09:59.618 12:02:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:59.618 12:02:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:59.618 12:02:06 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:09:59.618 12:02:06 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:09:59.618 12:02:06 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:09:59.618 12:02:06 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:59.618 12:02:06 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:09:59.618 12:02:06 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:09:59.618 12:02:06 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:09:59.618 12:02:06 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:09:59.618 12:02:06 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:09:59.618 12:02:06 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:09:59.618 12:02:06 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:09:59.618 12:02:06 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:09:59.618 12:02:06 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:09:59.618 12:02:06 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:09:59.618 12:02:06 -- bdev/bdev_raid.sh@226 -- # raid_pid=1220624 00:09:59.618 12:02:06 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 1220624' 00:09:59.618 Process raid pid: 1220624 00:09:59.618 12:02:06 
-- bdev/bdev_raid.sh@225 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:59.618 12:02:06 -- bdev/bdev_raid.sh@228 -- # waitforlisten 1220624 /var/tmp/spdk-raid.sock 00:09:59.618 12:02:06 -- common/autotest_common.sh@819 -- # '[' -z 1220624 ']' 00:09:59.618 12:02:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:59.618 12:02:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:59.618 12:02:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:59.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:59.618 12:02:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:59.618 12:02:06 -- common/autotest_common.sh@10 -- # set +x 00:09:59.618 [2024-07-25 12:02:06.884302] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:09:59.618 [2024-07-25 12:02:06.884350] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.877 [2024-07-25 12:02:06.973399] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:59.877 [2024-07-25 12:02:07.061671] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.877 [2024-07-25 12:02:07.121894] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:59.877 [2024-07-25 12:02:07.121921] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.444 12:02:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:00.444 12:02:07 -- common/autotest_common.sh@852 -- # return 0 00:10:00.444 12:02:07 -- bdev/bdev_raid.sh@232 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:00.702 [2024-07-25 12:02:07.833683] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:00.702 [2024-07-25 12:02:07.833715] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:00.702 [2024-07-25 12:02:07.833722] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:00.702 [2024-07-25 12:02:07.833729] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:00.703 12:02:07 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:00.703 12:02:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:00.703 12:02:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:00.703 12:02:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:10:00.703 12:02:07 -- bdev/bdev_raid.sh@120 -- # local 
strip_size=64 00:10:00.703 12:02:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:00.703 12:02:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:00.703 12:02:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:00.703 12:02:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:00.703 12:02:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:00.703 12:02:07 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:00.703 12:02:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.962 12:02:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:00.962 "name": "Existed_Raid", 00:10:00.962 "uuid": "84df7b6e-c640-4ee1-9fa5-03aadf4e7b69", 00:10:00.962 "strip_size_kb": 64, 00:10:00.962 "state": "configuring", 00:10:00.962 "raid_level": "concat", 00:10:00.962 "superblock": true, 00:10:00.962 "num_base_bdevs": 2, 00:10:00.962 "num_base_bdevs_discovered": 0, 00:10:00.962 "num_base_bdevs_operational": 2, 00:10:00.962 "base_bdevs_list": [ 00:10:00.962 { 00:10:00.962 "name": "BaseBdev1", 00:10:00.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.962 "is_configured": false, 00:10:00.962 "data_offset": 0, 00:10:00.962 "data_size": 0 00:10:00.962 }, 00:10:00.962 { 00:10:00.962 "name": "BaseBdev2", 00:10:00.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.962 "is_configured": false, 00:10:00.962 "data_offset": 0, 00:10:00.962 "data_size": 0 00:10:00.962 } 00:10:00.962 ] 00:10:00.962 }' 00:10:00.962 12:02:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:00.962 12:02:08 -- common/autotest_common.sh@10 -- # set +x 00:10:01.221 12:02:08 -- bdev/bdev_raid.sh@234 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:01.479 [2024-07-25 12:02:08.635656] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete 
raid bdev: Existed_Raid 00:10:01.479 [2024-07-25 12:02:08.635676] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1e75d40 name Existed_Raid, state configuring 00:10:01.479 12:02:08 -- bdev/bdev_raid.sh@238 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:01.737 [2024-07-25 12:02:08.804144] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:01.737 [2024-07-25 12:02:08.804169] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:01.737 [2024-07-25 12:02:08.804175] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:01.737 [2024-07-25 12:02:08.804184] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:01.737 12:02:08 -- bdev/bdev_raid.sh@239 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:01.737 [2024-07-25 12:02:08.989227] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:01.737 BaseBdev1 00:10:01.737 12:02:09 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:10:01.737 12:02:09 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:10:01.737 12:02:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:01.737 12:02:09 -- common/autotest_common.sh@889 -- # local i 00:10:01.737 12:02:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:01.738 12:02:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:01.738 12:02:09 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:01.996 12:02:09 -- common/autotest_common.sh@894 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:02.255 [ 00:10:02.255 { 00:10:02.255 "name": "BaseBdev1", 00:10:02.255 "aliases": [ 00:10:02.255 "981cc922-8b39-40dd-9ee3-3d0460bbce36" 00:10:02.255 ], 00:10:02.255 "product_name": "Malloc disk", 00:10:02.255 "block_size": 512, 00:10:02.256 "num_blocks": 65536, 00:10:02.256 "uuid": "981cc922-8b39-40dd-9ee3-3d0460bbce36", 00:10:02.256 "assigned_rate_limits": { 00:10:02.256 "rw_ios_per_sec": 0, 00:10:02.256 "rw_mbytes_per_sec": 0, 00:10:02.256 "r_mbytes_per_sec": 0, 00:10:02.256 "w_mbytes_per_sec": 0 00:10:02.256 }, 00:10:02.256 "claimed": true, 00:10:02.256 "claim_type": "exclusive_write", 00:10:02.256 "zoned": false, 00:10:02.256 "supported_io_types": { 00:10:02.256 "read": true, 00:10:02.256 "write": true, 00:10:02.256 "unmap": true, 00:10:02.256 "write_zeroes": true, 00:10:02.256 "flush": true, 00:10:02.256 "reset": true, 00:10:02.256 "compare": false, 00:10:02.256 "compare_and_write": false, 00:10:02.256 "abort": true, 00:10:02.256 "nvme_admin": false, 00:10:02.256 "nvme_io": false 00:10:02.256 }, 00:10:02.256 "memory_domains": [ 00:10:02.256 { 00:10:02.256 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.256 "dma_device_type": 2 00:10:02.256 } 00:10:02.256 ], 00:10:02.256 "driver_specific": {} 00:10:02.256 } 00:10:02.256 ] 00:10:02.256 12:02:09 -- common/autotest_common.sh@895 -- # return 0 00:10:02.256 12:02:09 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:02.256 12:02:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:02.256 12:02:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:02.256 12:02:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:10:02.256 12:02:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:02.256 12:02:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:02.256 12:02:09 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:02.256 12:02:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:02.256 12:02:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:02.256 12:02:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:02.256 12:02:09 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:02.256 12:02:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.256 12:02:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:02.256 "name": "Existed_Raid", 00:10:02.256 "uuid": "406c5544-ac9c-4488-99a1-dc079ef2868d", 00:10:02.256 "strip_size_kb": 64, 00:10:02.256 "state": "configuring", 00:10:02.256 "raid_level": "concat", 00:10:02.256 "superblock": true, 00:10:02.256 "num_base_bdevs": 2, 00:10:02.256 "num_base_bdevs_discovered": 1, 00:10:02.256 "num_base_bdevs_operational": 2, 00:10:02.256 "base_bdevs_list": [ 00:10:02.256 { 00:10:02.256 "name": "BaseBdev1", 00:10:02.256 "uuid": "981cc922-8b39-40dd-9ee3-3d0460bbce36", 00:10:02.256 "is_configured": true, 00:10:02.256 "data_offset": 2048, 00:10:02.256 "data_size": 63488 00:10:02.256 }, 00:10:02.256 { 00:10:02.256 "name": "BaseBdev2", 00:10:02.256 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.256 "is_configured": false, 00:10:02.256 "data_offset": 0, 00:10:02.256 "data_size": 0 00:10:02.256 } 00:10:02.256 ] 00:10:02.256 }' 00:10:02.256 12:02:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:02.256 12:02:09 -- common/autotest_common.sh@10 -- # set +x 00:10:02.823 12:02:09 -- bdev/bdev_raid.sh@242 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:03.081 [2024-07-25 12:02:10.152242] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:03.081 [2024-07-25 12:02:10.152282] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x1e75fc0 name Existed_Raid, state configuring 00:10:03.081 12:02:10 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:10:03.081 12:02:10 -- bdev/bdev_raid.sh@246 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:03.081 12:02:10 -- bdev/bdev_raid.sh@247 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:03.339 BaseBdev1 00:10:03.339 12:02:10 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:10:03.339 12:02:10 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:10:03.339 12:02:10 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:03.339 12:02:10 -- common/autotest_common.sh@889 -- # local i 00:10:03.339 12:02:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:03.339 12:02:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:03.339 12:02:10 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:03.597 12:02:10 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:03.597 [ 00:10:03.597 { 00:10:03.597 "name": "BaseBdev1", 00:10:03.597 "aliases": [ 00:10:03.597 "cd763f61-40a6-42cd-83bd-7351174c1cd1" 00:10:03.597 ], 00:10:03.597 "product_name": "Malloc disk", 00:10:03.597 "block_size": 512, 00:10:03.597 "num_blocks": 65536, 00:10:03.597 "uuid": "cd763f61-40a6-42cd-83bd-7351174c1cd1", 00:10:03.597 "assigned_rate_limits": { 00:10:03.597 "rw_ios_per_sec": 0, 00:10:03.597 "rw_mbytes_per_sec": 0, 00:10:03.597 "r_mbytes_per_sec": 0, 00:10:03.597 "w_mbytes_per_sec": 0 00:10:03.597 }, 00:10:03.597 "claimed": false, 00:10:03.597 "zoned": false, 00:10:03.597 "supported_io_types": { 00:10:03.597 "read": true, 00:10:03.597 
"write": true, 00:10:03.597 "unmap": true, 00:10:03.597 "write_zeroes": true, 00:10:03.597 "flush": true, 00:10:03.597 "reset": true, 00:10:03.597 "compare": false, 00:10:03.597 "compare_and_write": false, 00:10:03.598 "abort": true, 00:10:03.598 "nvme_admin": false, 00:10:03.598 "nvme_io": false 00:10:03.598 }, 00:10:03.598 "memory_domains": [ 00:10:03.598 { 00:10:03.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.598 "dma_device_type": 2 00:10:03.598 } 00:10:03.598 ], 00:10:03.598 "driver_specific": {} 00:10:03.598 } 00:10:03.598 ] 00:10:03.598 12:02:10 -- common/autotest_common.sh@895 -- # return 0 00:10:03.598 12:02:10 -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:03.856 [2024-07-25 12:02:10.999084] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:03.856 [2024-07-25 12:02:11.000169] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:03.856 [2024-07-25 12:02:11.000195] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:03.856 12:02:11 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:10:03.856 12:02:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:03.856 12:02:11 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:10:03.856 12:02:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:03.856 12:02:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:03.856 12:02:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:10:03.856 12:02:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:03.856 12:02:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:03.856 12:02:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:03.856 12:02:11 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:03.856 12:02:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:03.856 12:02:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:03.856 12:02:11 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:03.856 12:02:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:04.114 12:02:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:04.114 "name": "Existed_Raid", 00:10:04.114 "uuid": "7f0b7dba-7827-4ea0-a29b-e5099d695814", 00:10:04.114 "strip_size_kb": 64, 00:10:04.114 "state": "configuring", 00:10:04.114 "raid_level": "concat", 00:10:04.114 "superblock": true, 00:10:04.114 "num_base_bdevs": 2, 00:10:04.114 "num_base_bdevs_discovered": 1, 00:10:04.114 "num_base_bdevs_operational": 2, 00:10:04.114 "base_bdevs_list": [ 00:10:04.114 { 00:10:04.114 "name": "BaseBdev1", 00:10:04.114 "uuid": "cd763f61-40a6-42cd-83bd-7351174c1cd1", 00:10:04.114 "is_configured": true, 00:10:04.114 "data_offset": 2048, 00:10:04.114 "data_size": 63488 00:10:04.114 }, 00:10:04.114 { 00:10:04.114 "name": "BaseBdev2", 00:10:04.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:04.114 "is_configured": false, 00:10:04.114 "data_offset": 0, 00:10:04.114 "data_size": 0 00:10:04.114 } 00:10:04.114 ] 00:10:04.114 }' 00:10:04.114 12:02:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:04.114 12:02:11 -- common/autotest_common.sh@10 -- # set +x 00:10:04.372 12:02:11 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:04.631 [2024-07-25 12:02:11.835931] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:04.631 [2024-07-25 12:02:11.836050] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x201b790 00:10:04.631 [2024-07-25 
12:02:11.836059] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:10:04.631 [2024-07-25 12:02:11.836180] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1e75180 00:10:04.631 [2024-07-25 12:02:11.836255] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x201b790 00:10:04.631 [2024-07-25 12:02:11.836261] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x201b790 00:10:04.631 [2024-07-25 12:02:11.836328] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:04.631 BaseBdev2 00:10:04.631 12:02:11 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:10:04.631 12:02:11 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:10:04.631 12:02:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:04.631 12:02:11 -- common/autotest_common.sh@889 -- # local i 00:10:04.631 12:02:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:04.631 12:02:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:04.631 12:02:11 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:04.889 12:02:12 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:04.889 [ 00:10:04.889 { 00:10:04.889 "name": "BaseBdev2", 00:10:04.889 "aliases": [ 00:10:04.889 "0459e3c3-436f-4b8d-8673-dcb25c203fe1" 00:10:04.889 ], 00:10:04.889 "product_name": "Malloc disk", 00:10:04.889 "block_size": 512, 00:10:04.889 "num_blocks": 65536, 00:10:04.889 "uuid": "0459e3c3-436f-4b8d-8673-dcb25c203fe1", 00:10:04.889 "assigned_rate_limits": { 00:10:04.889 "rw_ios_per_sec": 0, 00:10:04.889 "rw_mbytes_per_sec": 0, 00:10:04.889 "r_mbytes_per_sec": 0, 00:10:04.889 "w_mbytes_per_sec": 0 00:10:04.889 }, 00:10:04.889 "claimed": 
true, 00:10:04.889 "claim_type": "exclusive_write", 00:10:04.889 "zoned": false, 00:10:04.889 "supported_io_types": { 00:10:04.889 "read": true, 00:10:04.889 "write": true, 00:10:04.889 "unmap": true, 00:10:04.889 "write_zeroes": true, 00:10:04.889 "flush": true, 00:10:04.889 "reset": true, 00:10:04.889 "compare": false, 00:10:04.889 "compare_and_write": false, 00:10:04.889 "abort": true, 00:10:04.889 "nvme_admin": false, 00:10:04.889 "nvme_io": false 00:10:04.889 }, 00:10:04.889 "memory_domains": [ 00:10:04.889 { 00:10:04.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.889 "dma_device_type": 2 00:10:04.889 } 00:10:04.889 ], 00:10:04.889 "driver_specific": {} 00:10:04.889 } 00:10:04.889 ] 00:10:04.889 12:02:12 -- common/autotest_common.sh@895 -- # return 0 00:10:04.889 12:02:12 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:10:04.889 12:02:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:04.889 12:02:12 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:10:04.889 12:02:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:04.889 12:02:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:04.889 12:02:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:10:04.889 12:02:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:04.889 12:02:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:04.889 12:02:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:04.889 12:02:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:04.889 12:02:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:04.890 12:02:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:04.890 12:02:12 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:04.890 12:02:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.148 12:02:12 
-- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:05.148 "name": "Existed_Raid", 00:10:05.148 "uuid": "7f0b7dba-7827-4ea0-a29b-e5099d695814", 00:10:05.148 "strip_size_kb": 64, 00:10:05.148 "state": "online", 00:10:05.148 "raid_level": "concat", 00:10:05.148 "superblock": true, 00:10:05.148 "num_base_bdevs": 2, 00:10:05.148 "num_base_bdevs_discovered": 2, 00:10:05.148 "num_base_bdevs_operational": 2, 00:10:05.148 "base_bdevs_list": [ 00:10:05.148 { 00:10:05.148 "name": "BaseBdev1", 00:10:05.148 "uuid": "cd763f61-40a6-42cd-83bd-7351174c1cd1", 00:10:05.148 "is_configured": true, 00:10:05.148 "data_offset": 2048, 00:10:05.148 "data_size": 63488 00:10:05.148 }, 00:10:05.148 { 00:10:05.148 "name": "BaseBdev2", 00:10:05.148 "uuid": "0459e3c3-436f-4b8d-8673-dcb25c203fe1", 00:10:05.148 "is_configured": true, 00:10:05.148 "data_offset": 2048, 00:10:05.148 "data_size": 63488 00:10:05.148 } 00:10:05.148 ] 00:10:05.148 }' 00:10:05.148 12:02:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:05.148 12:02:12 -- common/autotest_common.sh@10 -- # set +x 00:10:05.713 12:02:12 -- bdev/bdev_raid.sh@262 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:05.713 [2024-07-25 12:02:13.006971] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:05.713 [2024-07-25 12:02:13.006994] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:05.713 [2024-07-25 12:02:13.007026] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:05.970 12:02:13 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:10:05.970 12:02:13 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:10:05.970 12:02:13 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:10:05.970 12:02:13 -- bdev/bdev_raid.sh@197 -- # return 1 00:10:05.970 12:02:13 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:10:05.970 12:02:13 -- bdev/bdev_raid.sh@269 -- # 
verify_raid_bdev_state Existed_Raid offline concat 64 1 00:10:05.970 12:02:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:05.970 12:02:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:10:05.970 12:02:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:10:05.970 12:02:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:05.970 12:02:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:10:05.970 12:02:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:05.970 12:02:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:05.970 12:02:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:05.970 12:02:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:05.970 12:02:13 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:05.970 12:02:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:05.970 12:02:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:05.970 "name": "Existed_Raid", 00:10:05.970 "uuid": "7f0b7dba-7827-4ea0-a29b-e5099d695814", 00:10:05.970 "strip_size_kb": 64, 00:10:05.970 "state": "offline", 00:10:05.970 "raid_level": "concat", 00:10:05.970 "superblock": true, 00:10:05.970 "num_base_bdevs": 2, 00:10:05.970 "num_base_bdevs_discovered": 1, 00:10:05.970 "num_base_bdevs_operational": 1, 00:10:05.971 "base_bdevs_list": [ 00:10:05.971 { 00:10:05.971 "name": null, 00:10:05.971 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:05.971 "is_configured": false, 00:10:05.971 "data_offset": 2048, 00:10:05.971 "data_size": 63488 00:10:05.971 }, 00:10:05.971 { 00:10:05.971 "name": "BaseBdev2", 00:10:05.971 "uuid": "0459e3c3-436f-4b8d-8673-dcb25c203fe1", 00:10:05.971 "is_configured": true, 00:10:05.971 "data_offset": 2048, 00:10:05.971 "data_size": 63488 00:10:05.971 } 00:10:05.971 ] 00:10:05.971 }' 00:10:05.971 12:02:13 -- 
bdev/bdev_raid.sh@129 -- # xtrace_disable
00:10:05.971 12:02:13 -- common/autotest_common.sh@10 -- # set +x
00:10:06.536 12:02:13 -- bdev/bdev_raid.sh@273 -- # (( i = 1 ))
00:10:06.536 12:02:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:10:06.536 12:02:13 -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:06.536 12:02:13 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:10:06.536 12:02:13 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:10:06.536 12:02:13 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:10:06.536 12:02:13 -- bdev/bdev_raid.sh@279 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2
00:10:06.791 [2024-07-25 12:02:13.982281] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:10:06.791 [2024-07-25 12:02:13.982334] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x201b790 name Existed_Raid, state offline
00:10:06.791 12:02:14 -- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:10:06.791 12:02:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:10:06.791 12:02:14 -- bdev/bdev_raid.sh@281 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:06.791 12:02:14 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:10:07.049 12:02:14 -- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:10:07.049 12:02:14 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:10:07.049 12:02:14 -- bdev/bdev_raid.sh@287 -- # killprocess 1220624
00:10:07.049 12:02:14 -- common/autotest_common.sh@926 -- # '[' -z 1220624 ']'
00:10:07.049 12:02:14 -- common/autotest_common.sh@930 -- # kill -0 1220624
00:10:07.049 12:02:14 -- common/autotest_common.sh@931 -- # uname
00:10:07.049 12:02:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:10:07.049 12:02:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1220624
00:10:07.049 12:02:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:10:07.049 12:02:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:10:07.049 12:02:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1220624'
killing process with pid 1220624
00:10:07.049 12:02:14 -- common/autotest_common.sh@945 -- # kill 1220624
00:10:07.049 [2024-07-25 12:02:14.235426] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:07.049 12:02:14 -- common/autotest_common.sh@950 -- # wait 1220624
00:10:07.049 [2024-07-25 12:02:14.236284] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:07.307 12:02:14 -- bdev/bdev_raid.sh@289 -- # return 0
00:10:07.307
00:10:07.307 real 0m7.621s
00:10:07.307 user 0m13.178s
00:10:07.307 sys 0m1.570s
00:10:07.307 12:02:14 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:07.307 12:02:14 -- common/autotest_common.sh@10 -- # set +x
00:10:07.307 ************************************
00:10:07.307 END TEST raid_state_function_test_sb
00:10:07.307 ************************************
00:10:07.307 12:02:14 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 2
00:10:07.307 12:02:14 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']'
00:10:07.307 12:02:14 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:10:07.307 12:02:14 -- common/autotest_common.sh@10 -- # set +x
00:10:07.307 ************************************
00:10:07.307 START TEST raid_superblock_test
00:10:07.307 ************************************
00:10:07.307 12:02:14 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 2
00:10:07.307 12:02:14 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat
00:10:07.307 12:02:14 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2
00:10:07.307 12:02:14 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=()
00:10:07.307 12:02:14 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc
00:10:07.307 12:02:14 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=()
00:10:07.307 12:02:14 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt
00:10:07.307 12:02:14 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=()
00:10:07.307 12:02:14 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid
00:10:07.307 12:02:14 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1
00:10:07.307 12:02:14 -- bdev/bdev_raid.sh@344 -- # local strip_size
00:10:07.307 12:02:14 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg
00:10:07.307 12:02:14 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid
00:10:07.307 12:02:14 -- bdev/bdev_raid.sh@347 -- # local raid_bdev
00:10:07.307 12:02:14 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']'
00:10:07.307 12:02:14 -- bdev/bdev_raid.sh@350 -- # strip_size=64
00:10:07.307 12:02:14 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64'
00:10:07.307 12:02:14 -- bdev/bdev_raid.sh@357 -- # raid_pid=1221896
00:10:07.307 12:02:14 -- bdev/bdev_raid.sh@358 -- # waitforlisten 1221896 /var/tmp/spdk-raid.sock
00:10:07.307 12:02:14 -- bdev/bdev_raid.sh@356 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:10:07.307 12:02:14 -- common/autotest_common.sh@819 -- # '[' -z 1221896 ']'
00:10:07.307 12:02:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:10:07.307 12:02:14 -- common/autotest_common.sh@824 -- # local max_retries=100
00:10:07.307 12:02:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
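The trace above shows raid_superblock_test initializing its parameters (level concat, 2 base bdevs, strip size 64) and launching bdev_svc on /var/tmp/spdk-raid.sock before driving it with scripts/rpc.py. As a rough sketch, the rpc.py invocations that follow in this test correspond to request dictionaries like the ones below. The shape mirrors the "request:" dump rpc.py prints on failure later in this log (top-level params plus "method" and "req_id"); the parameter names for bdev_malloc_create and bdev_passthru_create are assumptions about how the CLI flags map to JSON fields, and nothing is actually sent here:

```python
import json

def rpc_request(req_id, method, **params):
    # Mirror the "request:" dump format rpc.py prints in this log:
    # the method's parameters at top level, plus "method" and "req_id".
    req = dict(params)
    req["method"] = method
    req["req_id"] = req_id
    return req

# The sequence the trace drives: two malloc bdevs, a passthru bdev on each,
# then a concat raid over the passthru bdevs with a superblock (-s).
# "32 512" on the CLI is 32 MiB of 512-byte blocks, i.e. 65536 blocks, which
# matches the "num_blocks": 65536 reported for a malloc bdev later in this log.
requests = [
    rpc_request(1, "bdev_malloc_create", name="malloc1", num_blocks=65536, block_size=512),
    rpc_request(2, "bdev_passthru_create", base_bdev_name="malloc1", name="pt1",
                uuid="00000000-0000-0000-0000-000000000001"),
    rpc_request(3, "bdev_malloc_create", name="malloc2", num_blocks=65536, block_size=512),
    rpc_request(4, "bdev_passthru_create", base_bdev_name="malloc2", name="pt2",
                uuid="00000000-0000-0000-0000-000000000002"),
    rpc_request(5, "bdev_raid_create", name="raid_bdev1", raid_level="concat",
                base_bdevs=["pt1", "pt2"], strip_size_kb=64, superblock=True),
]

print(json.dumps(requests[-1], indent=2))
```

Only the bdev_raid_create fields are directly confirmed by the request dump captured later in this trace; treat the rest as illustrative.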
00:10:07.307 12:02:14 -- common/autotest_common.sh@828 -- # xtrace_disable
00:10:07.307 12:02:14 -- common/autotest_common.sh@10 -- # set +x
00:10:07.307 [2024-07-25 12:02:14.549021] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:10:07.307 [2024-07-25 12:02:14.549079] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1221896 ]
00:10:07.565 [2024-07-25 12:02:14.636583] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:07.565 [2024-07-25 12:02:14.724987] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:07.565 [2024-07-25 12:02:14.784320] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:07.565 [2024-07-25 12:02:14.784351] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:08.131 12:02:15 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:10:08.131 12:02:15 -- common/autotest_common.sh@852 -- # return 0
00:10:08.131 12:02:15 -- bdev/bdev_raid.sh@361 -- # (( i = 1 ))
00:10:08.131 12:02:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:10:08.131 12:02:15 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1
00:10:08.131 12:02:15 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1
00:10:08.131 12:02:15 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:10:08.131 12:02:15 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:10:08.131 12:02:15 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:10:08.131 12:02:15 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:10:08.131 12:02:15 -- bdev/bdev_raid.sh@370 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:10:08.389 malloc1
00:10:08.389 12:02:15 -- bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:10:08.389 [2024-07-25 12:02:15.670617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:10:08.389 [2024-07-25 12:02:15.670653] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:08.389 [2024-07-25 12:02:15.670686] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x109a8d0
00:10:08.389 [2024-07-25 12:02:15.670694] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:08.389 [2024-07-25 12:02:15.671944] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:08.389 [2024-07-25 12:02:15.671968] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:10:08.389 pt1
00:10:08.389 12:02:15 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:10:08.389 12:02:15 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:10:08.389 12:02:15 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:10:08.389 12:02:15 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:10:08.389 12:02:15 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:10:08.389 12:02:15 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:10:08.389 12:02:15 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:10:08.389 12:02:15 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:10:08.389 12:02:15 -- bdev/bdev_raid.sh@370 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:10:08.647 malloc2
00:10:08.647 12:02:15 -- bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:10:08.906 [2024-07-25 12:02:16.011347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:10:08.906 [2024-07-25 12:02:16.011382] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:08.906 [2024-07-25 12:02:16.011399] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12421a0
00:10:08.906 [2024-07-25 12:02:16.011407] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:08.906 [2024-07-25 12:02:16.012565] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:08.906 [2024-07-25 12:02:16.012588] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:10:08.906 pt2
00:10:08.906 12:02:16 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:10:08.906 12:02:16 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:10:08.906 12:02:16 -- bdev/bdev_raid.sh@375 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s
00:10:08.906 [2024-07-25 12:02:16.179814] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:10:08.906 [2024-07-25 12:02:16.180797] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:08.906 [2024-07-25 12:02:16.180903] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x1242700
00:10:08.906 [2024-07-25 12:02:16.180912] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:10:08.906 [2024-07-25 12:02:16.181047] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1241bf0
00:10:08.906 [2024-07-25 12:02:16.181138] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1242700
00:10:08.906 [2024-07-25 12:02:16.181144] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1242700
00:10:08.906 [2024-07-25 12:02:16.181210] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:08.906 12:02:16 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:10:08.906 12:02:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:10:08.906 12:02:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:10:08.906 12:02:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:10:08.906 12:02:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:10:08.906 12:02:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:10:08.906 12:02:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:10:08.906 12:02:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:10:08.906 12:02:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:10:08.906 12:02:16 -- bdev/bdev_raid.sh@125 -- # local tmp
00:10:08.906 12:02:16 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:08.906 12:02:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:09.164 12:02:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:10:09.164 "name": "raid_bdev1",
00:10:09.164 "uuid": "42139971-5d77-4911-83ca-72d29d31dfe0",
00:10:09.164 "strip_size_kb": 64,
00:10:09.164 "state": "online",
00:10:09.164 "raid_level": "concat",
00:10:09.164 "superblock": true,
00:10:09.164 "num_base_bdevs": 2,
00:10:09.164 "num_base_bdevs_discovered": 2,
00:10:09.164 "num_base_bdevs_operational": 2,
00:10:09.164 "base_bdevs_list": [
00:10:09.164 {
00:10:09.164 "name": "pt1",
00:10:09.164 "uuid": "303f022f-2a42-5737-addd-3bee615483b1",
00:10:09.164 "is_configured": true,
00:10:09.164 "data_offset": 2048,
00:10:09.164 "data_size": 63488
00:10:09.164 },
00:10:09.164 {
00:10:09.164 "name": "pt2",
00:10:09.164 "uuid": "b4d56c6d-e064-53e7-afc8-31cda496c81c",
00:10:09.164 "is_configured": true,
00:10:09.164 "data_offset": 2048,
00:10:09.164 "data_size": 63488
00:10:09.164 }
00:10:09.164 ]
00:10:09.164 }'
00:10:09.164 12:02:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:10:09.164 12:02:16 -- common/autotest_common.sh@10 -- # set +x
00:10:09.731 12:02:16 -- bdev/bdev_raid.sh@379 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:10:09.731 12:02:16 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid'
00:10:09.731 [2024-07-25 12:02:17.030091] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:10.020 12:02:17 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=42139971-5d77-4911-83ca-72d29d31dfe0
00:10:10.020 12:02:17 -- bdev/bdev_raid.sh@380 -- # '[' -z 42139971-5d77-4911-83ca-72d29d31dfe0 ']'
00:10:10.020 12:02:17 -- bdev/bdev_raid.sh@385 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:10:10.020 [2024-07-25 12:02:17.198396] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:10.020 [2024-07-25 12:02:17.198416] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:10.020 [2024-07-25 12:02:17.198457] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:10.020 [2024-07-25 12:02:17.198486] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:10.020 [2024-07-25 12:02:17.198494] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1242700 name raid_bdev1, state offline
00:10:10.020 12:02:17 -- bdev/bdev_raid.sh@386 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:10.020 12:02:17 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]'
00:10:10.278 12:02:17 -- bdev/bdev_raid.sh@386 -- # raid_bdev=
00:10:10.278 12:02:17 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']'
00:10:10.278 12:02:17 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:10:10.278 12:02:17 -- bdev/bdev_raid.sh@393 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:10:10.278 12:02:17 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:10:10.278 12:02:17 -- bdev/bdev_raid.sh@393 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:10:10.537 12:02:17 -- bdev/bdev_raid.sh@395 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:10:10.537 12:02:17 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:10:10.796 12:02:17 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
00:10:10.796 12:02:17 -- bdev/bdev_raid.sh@401 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1
00:10:10.796 12:02:17 -- common/autotest_common.sh@640 -- # local es=0
00:10:10.796 12:02:17 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1
00:10:10.796 12:02:17 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:10:10.796 12:02:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:10:10.796 12:02:17 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:10:10.796 12:02:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:10:10.796 12:02:17 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:10:10.796 12:02:17 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:10:10.796 12:02:17 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:10:10.796 12:02:17 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]]
00:10:10.796 12:02:17 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1
00:10:10.796 [2024-07-25 12:02:18.048579] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:10:10.796 [2024-07-25 12:02:18.049586] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:10:10.796 [2024-07-25 12:02:18.049630] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:10:10.796 [2024-07-25 12:02:18.049660] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:10:10.796 [2024-07-25 12:02:18.049678] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:10.796 [2024-07-25 12:02:18.049687] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12419a0 name raid_bdev1, state configuring
00:10:10.796 request:
00:10:10.796 {
00:10:10.796 "name": "raid_bdev1",
00:10:10.796 "raid_level": "concat",
00:10:10.796 "base_bdevs": [
00:10:10.796 "malloc1",
00:10:10.796 "malloc2"
00:10:10.796 ],
00:10:10.796 "superblock": false,
00:10:10.796 "strip_size_kb": 64,
00:10:10.796 "method": "bdev_raid_create",
00:10:10.796 "req_id": 1
00:10:10.796 }
00:10:10.796 Got JSON-RPC error response
00:10:10.796 response:
00:10:10.796 {
00:10:10.796 "code": -17,
00:10:10.796 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:10:10.796 }
00:10:10.796 12:02:18 -- common/autotest_common.sh@643 -- # es=1
00:10:10.796 12:02:18 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:10:10.796 12:02:18 -- common/autotest_common.sh@662 -- # [[ -n '' ]]
00:10:10.796 12:02:18 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:10:10.796 12:02:18 -- bdev/bdev_raid.sh@403 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:10.796 12:02:18 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:10:11.054 12:02:18 -- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:10:11.054 12:02:18 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
00:10:11.054 12:02:18 -- bdev/bdev_raid.sh@409 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:10:11.313 [2024-07-25 12:02:18.393450] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:10:11.313 [2024-07-25 12:02:18.393481] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:11.313 [2024-07-25 12:02:18.393499] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x109ab00
00:10:11.313 [2024-07-25 12:02:18.393508] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:11.313 [2024-07-25 12:02:18.394676] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:11.313 [2024-07-25 12:02:18.394698] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:10:11.313 [2024-07-25 12:02:18.394747] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:10:11.313 [2024-07-25 12:02:18.394765] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:10:11.313 pt1
00:10:11.313 12:02:18 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2
00:10:11.313 12:02:18 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:10:11.313 12:02:18 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:10:11.313 12:02:18 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:10:11.313 12:02:18 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:10:11.313 12:02:18 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:10:11.313 12:02:18 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:10:11.313 12:02:18 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:10:11.313 12:02:18 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:10:11.313 12:02:18 -- bdev/bdev_raid.sh@125 -- # local tmp
00:10:11.313 12:02:18 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:11.313 12:02:18 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:11.313 12:02:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:10:11.313 "name": "raid_bdev1",
00:10:11.313 "uuid": "42139971-5d77-4911-83ca-72d29d31dfe0",
00:10:11.313 "strip_size_kb": 64,
00:10:11.313 "state": "configuring",
00:10:11.313 "raid_level": "concat",
00:10:11.313 "superblock": true,
00:10:11.313 "num_base_bdevs": 2,
00:10:11.313 "num_base_bdevs_discovered": 1,
00:10:11.313 "num_base_bdevs_operational": 2,
00:10:11.313 "base_bdevs_list": [
00:10:11.313 {
00:10:11.313 "name": "pt1",
00:10:11.313 "uuid": "303f022f-2a42-5737-addd-3bee615483b1",
00:10:11.313 "is_configured": true,
00:10:11.313 "data_offset": 2048,
00:10:11.313 "data_size": 63488
00:10:11.313 },
00:10:11.313 {
00:10:11.313 "name": null,
00:10:11.313 "uuid": "b4d56c6d-e064-53e7-afc8-31cda496c81c",
00:10:11.313 "is_configured": false,
00:10:11.313 "data_offset": 2048,
00:10:11.313 "data_size": 63488
00:10:11.313 }
00:10:11.313 ]
00:10:11.313 }'
00:10:11.313 12:02:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:10:11.313 12:02:18 -- common/autotest_common.sh@10 -- # set +x
00:10:11.879 12:02:19 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']'
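The `NOT bdev_raid_create` step above deliberately re-creates raid_bdev1 from malloc1/malloc2, which still carry a raid superblock, and expects the JSON-RPC error `-17: File exists`. A small sketch of how a client could check for exactly that failure; the response body is copied from the trace above, and the surrounding handling (mirroring the test's `es=1` bookkeeping) is illustrative only:

```python
import json

# Error response captured verbatim in the trace above.
response_text = """{
  "code": -17,
  "message": "Failed to create RAID bdev raid_bdev1: File exists"
}"""

error = json.loads(response_text)

# The negative test expects the create to fail with -17 (EEXIST).
expect_failure = True
failed_as_expected = expect_failure and error["code"] == -17
print(failed_as_expected)  # → True
```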
00:10:11.879 12:02:19 -- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:10:11.879 12:02:19 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:10:11.879 12:02:19 -- bdev/bdev_raid.sh@423 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:10:12.138 [2024-07-25 12:02:19.207550] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:10:12.138 [2024-07-25 12:02:19.207586] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:12.138 [2024-07-25 12:02:19.207619] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12437c0
00:10:12.138 [2024-07-25 12:02:19.207629] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:12.138 [2024-07-25 12:02:19.207869] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:12.138 [2024-07-25 12:02:19.207880] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:10:12.138 [2024-07-25 12:02:19.207926] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:10:12.138 [2024-07-25 12:02:19.207939] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:12.138 [2024-07-25 12:02:19.208004] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x1247ac0
00:10:12.138 [2024-07-25 12:02:19.208011] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512
00:10:12.138 [2024-07-25 12:02:19.208122] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1246040
00:10:12.138 [2024-07-25 12:02:19.208203] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1247ac0
00:10:12.138 [2024-07-25 12:02:19.208209] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1247ac0
[2024-07-25 12:02:19.208279] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:12.138 pt2
00:10:12.138 12:02:19 -- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:10:12.138 12:02:19 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:10:12.138 12:02:19 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2
00:10:12.138 12:02:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:10:12.138 12:02:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:10:12.138 12:02:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat
00:10:12.138 12:02:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:10:12.138 12:02:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:10:12.138 12:02:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:10:12.138 12:02:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:10:12.138 12:02:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:10:12.138 12:02:19 -- bdev/bdev_raid.sh@125 -- # local tmp
00:10:12.138 12:02:19 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:12.138 12:02:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:12.138 12:02:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:10:12.138 "name": "raid_bdev1",
00:10:12.138 "uuid": "42139971-5d77-4911-83ca-72d29d31dfe0",
00:10:12.138 "strip_size_kb": 64,
00:10:12.138 "state": "online",
00:10:12.138 "raid_level": "concat",
00:10:12.138 "superblock": true,
00:10:12.138 "num_base_bdevs": 2,
00:10:12.138 "num_base_bdevs_discovered": 2,
00:10:12.138 "num_base_bdevs_operational": 2,
00:10:12.138 "base_bdevs_list": [
00:10:12.138 {
00:10:12.138 "name": "pt1",
00:10:12.138 "uuid": "303f022f-2a42-5737-addd-3bee615483b1",
00:10:12.138 "is_configured": true,
00:10:12.138 "data_offset": 2048,
00:10:12.138 "data_size": 63488
00:10:12.138 },
00:10:12.138 {
00:10:12.138 "name": "pt2",
00:10:12.138 "uuid": "b4d56c6d-e064-53e7-afc8-31cda496c81c",
00:10:12.138 "is_configured": true,
00:10:12.138 "data_offset": 2048,
00:10:12.138 "data_size": 63488
00:10:12.138 }
00:10:12.138 ]
00:10:12.138 }'
00:10:12.138 12:02:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:10:12.138 12:02:19 -- common/autotest_common.sh@10 -- # set +x
00:10:12.705 12:02:19 -- bdev/bdev_raid.sh@430 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:10:12.705 12:02:19 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid'
00:10:12.963 [2024-07-25 12:02:20.033817] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:12.963 12:02:20 -- bdev/bdev_raid.sh@430 -- # '[' 42139971-5d77-4911-83ca-72d29d31dfe0 '!=' 42139971-5d77-4911-83ca-72d29d31dfe0 ']'
00:10:12.963 12:02:20 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat
00:10:12.963 12:02:20 -- bdev/bdev_raid.sh@195 -- # case $1 in
00:10:12.963 12:02:20 -- bdev/bdev_raid.sh@197 -- # return 1
00:10:12.963 12:02:20 -- bdev/bdev_raid.sh@511 -- # killprocess 1221896
00:10:12.963 12:02:20 -- common/autotest_common.sh@926 -- # '[' -z 1221896 ']'
00:10:12.963 12:02:20 -- common/autotest_common.sh@930 -- # kill -0 1221896
00:10:12.963 12:02:20 -- common/autotest_common.sh@931 -- # uname
00:10:12.963 12:02:20 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:10:12.963 12:02:20 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1221896
00:10:12.963 12:02:20 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:10:12.963 12:02:20 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:10:12.963 12:02:20 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1221896'
killing process with pid 1221896
00:10:12.963 12:02:20 -- common/autotest_common.sh@945 -- # kill 1221896
00:10:12.963 [2024-07-25 12:02:20.106351] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:12.963 [2024-07-25 12:02:20.106401] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:12.963 [2024-07-25 12:02:20.106431] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:12.963 [2024-07-25 12:02:20.106439] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1247ac0 name raid_bdev1, state offline
00:10:12.963 12:02:20 -- common/autotest_common.sh@950 -- # wait 1221896
00:10:12.963 [2024-07-25 12:02:20.122485] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:13.222 12:02:20 -- bdev/bdev_raid.sh@513 -- # return 0
00:10:13.222
00:10:13.222 real 0m5.845s
00:10:13.222 user 0m9.961s
00:10:13.222 sys 0m1.226s
00:10:13.222 12:02:20 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:13.222 12:02:20 -- common/autotest_common.sh@10 -- # set +x
00:10:13.222 ************************************
00:10:13.222 END TEST raid_superblock_test
00:10:13.222 ************************************
00:10:13.222 12:02:20 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1
00:10:13.222 12:02:20 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false
00:10:13.222 12:02:20 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']'
00:10:13.222 12:02:20 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:10:13.222 12:02:20 -- common/autotest_common.sh@10 -- # set +x
00:10:13.222 ************************************
00:10:13.222 START TEST raid_state_function_test
00:10:13.222 ************************************
00:10:13.222 12:02:20 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 false
00:10:13.222 12:02:20 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1
00:10:13.222 12:02:20 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2
00:10:13.222 12:02:20 -- bdev/bdev_raid.sh@204 -- # local superblock=false
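The `killprocess` helper traced above checks the target pid is alive with `kill -0`, confirms via `ps --no-headers -o comm=` that it is not a `sudo` wrapper, then kills it and waits. The liveness probe can be sketched in Python; this is only an analogue of the shell helper's `kill -0` step, not the actual `common/autotest_common.sh` implementation:

```python
import os

def is_alive(pid):
    # kill -0 semantics: signal 0 probes whether the pid exists
    # without delivering any signal to it.
    try:
        os.kill(pid, 0)
        return True
    except ProcessLookupError:
        # No such process.
        return False
    except PermissionError:
        # Process exists but belongs to another user.
        return True

print(is_alive(os.getpid()))  # → True
```

The `sudo` guard exists so the helper never kills the privilege wrapper instead of the SPDK reactor process it launched.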
00:10:13.222 12:02:20 -- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:10:13.222 12:02:20 -- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:10:13.222 12:02:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:10:13.222 12:02:20 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1
00:10:13.222 12:02:20 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:10:13.222 12:02:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:10:13.222 12:02:20 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2
00:10:13.222 12:02:20 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:10:13.222 12:02:20 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:10:13.222 12:02:20 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2')
00:10:13.222 12:02:20 -- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:10:13.222 12:02:20 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:10:13.222 12:02:20 -- bdev/bdev_raid.sh@208 -- # local strip_size
00:10:13.222 12:02:20 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:10:13.222 12:02:20 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:10:13.222 12:02:20 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']'
00:10:13.222 12:02:20 -- bdev/bdev_raid.sh@216 -- # strip_size=0
00:10:13.222 12:02:20 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']'
00:10:13.222 12:02:20 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg=
00:10:13.222 12:02:20 -- bdev/bdev_raid.sh@226 -- # raid_pid=1222765
00:10:13.222 12:02:20 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 1222765'
Process raid pid: 1222765
00:10:13.222 12:02:20 -- bdev/bdev_raid.sh@225 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:10:13.222 12:02:20 -- bdev/bdev_raid.sh@228 -- # waitforlisten 1222765 /var/tmp/spdk-raid.sock
00:10:13.222 12:02:20 -- common/autotest_common.sh@819 -- # '[' -z 1222765 ']'
00:10:13.222 12:02:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:10:13.222 12:02:20 -- common/autotest_common.sh@824 -- # local max_retries=100
00:10:13.222 12:02:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:10:13.222 12:02:20 -- common/autotest_common.sh@828 -- # xtrace_disable
00:10:13.222 12:02:20 -- common/autotest_common.sh@10 -- # set +x
00:10:13.480 [2024-07-25 12:02:20.450532] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:10:13.480 [2024-07-25 12:02:20.450590] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:13.480 [2024-07-25 12:02:20.540230] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:13.480 [2024-07-25 12:02:20.624662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:13.480 [2024-07-25 12:02:20.679433] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:13.480 [2024-07-25 12:02:20.679459] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:14.047 12:02:21 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:10:14.047 12:02:21 -- common/autotest_common.sh@852 -- # return 0
00:10:14.047 12:02:21 -- bdev/bdev_raid.sh@232 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
00:10:14.306 [2024-07-25 12:02:21.398504] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:14.306 [2024-07-25 12:02:21.398535] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:14.306 [2024-07-25 12:02:21.398542] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:14.306 [2024-07-25 12:02:21.398549] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:14.306 12:02:21 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:10:14.306 12:02:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:10:14.306 12:02:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:10:14.306 12:02:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:10:14.306 12:02:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:10:14.306 12:02:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:10:14.306 12:02:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:10:14.306 12:02:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:10:14.306 12:02:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:10:14.306 12:02:21 -- bdev/bdev_raid.sh@125 -- # local tmp
00:10:14.306 12:02:21 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:14.306 12:02:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:14.306 12:02:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:10:14.306 "name": "Existed_Raid",
00:10:14.306 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:14.306 "strip_size_kb": 0,
00:10:14.306 "state": "configuring",
00:10:14.306 "raid_level": "raid1",
00:10:14.306 "superblock": false,
00:10:14.306 "num_base_bdevs": 2,
00:10:14.306 "num_base_bdevs_discovered": 0,
00:10:14.306 "num_base_bdevs_operational": 2,
00:10:14.306 "base_bdevs_list": [
00:10:14.306 {
00:10:14.306 "name": "BaseBdev1",
00:10:14.306 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:14.306 "is_configured": false,
00:10:14.306 "data_offset": 0,
00:10:14.306 "data_size": 0
00:10:14.306 },
00:10:14.306 {
00:10:14.306 "name": "BaseBdev2", 00:10:14.306 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:14.306 "is_configured": false, 00:10:14.306 "data_offset": 0, 00:10:14.306 "data_size": 0 00:10:14.306 } 00:10:14.306 ] 00:10:14.306 }' 00:10:14.306 12:02:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:14.306 12:02:21 -- common/autotest_common.sh@10 -- # set +x 00:10:14.875 12:02:22 -- bdev/bdev_raid.sh@234 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:15.134 [2024-07-25 12:02:22.232705] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:15.134 [2024-07-25 12:02:22.232727] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1526d40 name Existed_Raid, state configuring 00:10:15.134 12:02:22 -- bdev/bdev_raid.sh@238 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:15.134 [2024-07-25 12:02:22.413174] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:15.134 [2024-07-25 12:02:22.413197] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:15.134 [2024-07-25 12:02:22.413203] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:15.134 [2024-07-25 12:02:22.413211] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:15.134 12:02:22 -- bdev/bdev_raid.sh@239 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:15.393 [2024-07-25 12:02:22.594107] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:15.393 BaseBdev1 00:10:15.393 12:02:22 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:10:15.393 12:02:22 -- 
common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:10:15.393 12:02:22 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:15.393 12:02:22 -- common/autotest_common.sh@889 -- # local i 00:10:15.393 12:02:22 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:15.393 12:02:22 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:15.393 12:02:22 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:15.652 12:02:22 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:15.652 [ 00:10:15.652 { 00:10:15.652 "name": "BaseBdev1", 00:10:15.652 "aliases": [ 00:10:15.652 "828ebac9-3c20-4a03-9fcc-7124c3b9d691" 00:10:15.652 ], 00:10:15.652 "product_name": "Malloc disk", 00:10:15.652 "block_size": 512, 00:10:15.652 "num_blocks": 65536, 00:10:15.652 "uuid": "828ebac9-3c20-4a03-9fcc-7124c3b9d691", 00:10:15.652 "assigned_rate_limits": { 00:10:15.652 "rw_ios_per_sec": 0, 00:10:15.652 "rw_mbytes_per_sec": 0, 00:10:15.652 "r_mbytes_per_sec": 0, 00:10:15.652 "w_mbytes_per_sec": 0 00:10:15.652 }, 00:10:15.652 "claimed": true, 00:10:15.652 "claim_type": "exclusive_write", 00:10:15.652 "zoned": false, 00:10:15.652 "supported_io_types": { 00:10:15.652 "read": true, 00:10:15.652 "write": true, 00:10:15.652 "unmap": true, 00:10:15.652 "write_zeroes": true, 00:10:15.652 "flush": true, 00:10:15.652 "reset": true, 00:10:15.652 "compare": false, 00:10:15.652 "compare_and_write": false, 00:10:15.652 "abort": true, 00:10:15.652 "nvme_admin": false, 00:10:15.652 "nvme_io": false 00:10:15.652 }, 00:10:15.652 "memory_domains": [ 00:10:15.652 { 00:10:15.652 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.652 "dma_device_type": 2 00:10:15.652 } 00:10:15.652 ], 00:10:15.652 "driver_specific": {} 00:10:15.652 } 00:10:15.652 ] 00:10:15.652 12:02:22 -- 
common/autotest_common.sh@895 -- # return 0 00:10:15.652 12:02:22 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:15.652 12:02:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:15.652 12:02:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:15.652 12:02:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:15.652 12:02:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:15.652 12:02:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:15.652 12:02:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:15.652 12:02:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:15.652 12:02:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:15.652 12:02:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:15.652 12:02:22 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:15.652 12:02:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.911 12:02:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:15.911 "name": "Existed_Raid", 00:10:15.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.911 "strip_size_kb": 0, 00:10:15.911 "state": "configuring", 00:10:15.911 "raid_level": "raid1", 00:10:15.911 "superblock": false, 00:10:15.911 "num_base_bdevs": 2, 00:10:15.911 "num_base_bdevs_discovered": 1, 00:10:15.911 "num_base_bdevs_operational": 2, 00:10:15.911 "base_bdevs_list": [ 00:10:15.911 { 00:10:15.911 "name": "BaseBdev1", 00:10:15.911 "uuid": "828ebac9-3c20-4a03-9fcc-7124c3b9d691", 00:10:15.911 "is_configured": true, 00:10:15.911 "data_offset": 0, 00:10:15.911 "data_size": 65536 00:10:15.911 }, 00:10:15.911 { 00:10:15.911 "name": "BaseBdev2", 00:10:15.911 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:15.911 "is_configured": false, 00:10:15.911 "data_offset": 0, 00:10:15.911 
"data_size": 0 00:10:15.911 } 00:10:15.911 ] 00:10:15.911 }' 00:10:15.911 12:02:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:15.911 12:02:23 -- common/autotest_common.sh@10 -- # set +x 00:10:16.478 12:02:23 -- bdev/bdev_raid.sh@242 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:16.478 [2024-07-25 12:02:23.769139] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:16.478 [2024-07-25 12:02:23.769175] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1526fc0 name Existed_Raid, state configuring 00:10:16.736 12:02:23 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:10:16.736 12:02:23 -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:16.736 [2024-07-25 12:02:23.937584] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:16.736 [2024-07-25 12:02:23.938660] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:16.736 [2024-07-25 12:02:23.938684] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:16.736 12:02:23 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:10:16.736 12:02:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:16.736 12:02:23 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:16.736 12:02:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:16.736 12:02:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:16.736 12:02:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:16.736 12:02:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:16.736 12:02:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:16.736 12:02:23 -- 
bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:16.736 12:02:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:16.736 12:02:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:16.736 12:02:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:16.736 12:02:23 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:16.736 12:02:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:16.995 12:02:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:16.995 "name": "Existed_Raid", 00:10:16.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.995 "strip_size_kb": 0, 00:10:16.995 "state": "configuring", 00:10:16.995 "raid_level": "raid1", 00:10:16.995 "superblock": false, 00:10:16.995 "num_base_bdevs": 2, 00:10:16.995 "num_base_bdevs_discovered": 1, 00:10:16.995 "num_base_bdevs_operational": 2, 00:10:16.995 "base_bdevs_list": [ 00:10:16.995 { 00:10:16.995 "name": "BaseBdev1", 00:10:16.995 "uuid": "828ebac9-3c20-4a03-9fcc-7124c3b9d691", 00:10:16.995 "is_configured": true, 00:10:16.995 "data_offset": 0, 00:10:16.995 "data_size": 65536 00:10:16.995 }, 00:10:16.995 { 00:10:16.995 "name": "BaseBdev2", 00:10:16.995 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:16.995 "is_configured": false, 00:10:16.995 "data_offset": 0, 00:10:16.995 "data_size": 0 00:10:16.995 } 00:10:16.995 ] 00:10:16.995 }' 00:10:16.995 12:02:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:16.995 12:02:24 -- common/autotest_common.sh@10 -- # set +x 00:10:17.562 12:02:24 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:17.562 [2024-07-25 12:02:24.758627] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:17.562 [2024-07-25 12:02:24.758655] bdev_raid.c:1584:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x1526630 00:10:17.562 [2024-07-25 12:02:24.758661] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:17.562 [2024-07-25 12:02:24.758799] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1528ef0 00:10:17.562 [2024-07-25 12:02:24.758887] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1526630 00:10:17.562 [2024-07-25 12:02:24.758893] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1526630 00:10:17.562 [2024-07-25 12:02:24.759019] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.562 BaseBdev2 00:10:17.562 12:02:24 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:10:17.562 12:02:24 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:10:17.562 12:02:24 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:17.562 12:02:24 -- common/autotest_common.sh@889 -- # local i 00:10:17.562 12:02:24 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:17.562 12:02:24 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:17.562 12:02:24 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:17.821 12:02:24 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:17.821 [ 00:10:17.821 { 00:10:17.821 "name": "BaseBdev2", 00:10:17.821 "aliases": [ 00:10:17.821 "f46168b6-530e-40ae-86ca-3963e2e58333" 00:10:17.821 ], 00:10:17.821 "product_name": "Malloc disk", 00:10:17.821 "block_size": 512, 00:10:17.821 "num_blocks": 65536, 00:10:17.821 "uuid": "f46168b6-530e-40ae-86ca-3963e2e58333", 00:10:17.821 "assigned_rate_limits": { 00:10:17.821 "rw_ios_per_sec": 0, 00:10:17.821 "rw_mbytes_per_sec": 0, 00:10:17.821 "r_mbytes_per_sec": 0, 00:10:17.821 
"w_mbytes_per_sec": 0 00:10:17.821 }, 00:10:17.821 "claimed": true, 00:10:17.821 "claim_type": "exclusive_write", 00:10:17.821 "zoned": false, 00:10:17.821 "supported_io_types": { 00:10:17.821 "read": true, 00:10:17.821 "write": true, 00:10:17.821 "unmap": true, 00:10:17.821 "write_zeroes": true, 00:10:17.821 "flush": true, 00:10:17.821 "reset": true, 00:10:17.821 "compare": false, 00:10:17.821 "compare_and_write": false, 00:10:17.821 "abort": true, 00:10:17.821 "nvme_admin": false, 00:10:17.821 "nvme_io": false 00:10:17.821 }, 00:10:17.821 "memory_domains": [ 00:10:17.821 { 00:10:17.821 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:17.821 "dma_device_type": 2 00:10:17.821 } 00:10:17.821 ], 00:10:17.821 "driver_specific": {} 00:10:17.821 } 00:10:17.821 ] 00:10:18.080 12:02:25 -- common/autotest_common.sh@895 -- # return 0 00:10:18.080 12:02:25 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:10:18.080 12:02:25 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:18.080 12:02:25 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:18.080 12:02:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:18.080 12:02:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:18.080 12:02:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:18.080 12:02:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:18.080 12:02:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:18.080 12:02:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:18.080 12:02:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:18.080 12:02:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:18.080 12:02:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:18.080 12:02:25 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:18.080 12:02:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:10:18.080 12:02:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:18.080 "name": "Existed_Raid", 00:10:18.080 "uuid": "72cc1dc6-c464-4afb-862d-7704df3de4fc", 00:10:18.080 "strip_size_kb": 0, 00:10:18.080 "state": "online", 00:10:18.080 "raid_level": "raid1", 00:10:18.080 "superblock": false, 00:10:18.080 "num_base_bdevs": 2, 00:10:18.080 "num_base_bdevs_discovered": 2, 00:10:18.080 "num_base_bdevs_operational": 2, 00:10:18.080 "base_bdevs_list": [ 00:10:18.080 { 00:10:18.080 "name": "BaseBdev1", 00:10:18.080 "uuid": "828ebac9-3c20-4a03-9fcc-7124c3b9d691", 00:10:18.080 "is_configured": true, 00:10:18.080 "data_offset": 0, 00:10:18.080 "data_size": 65536 00:10:18.080 }, 00:10:18.080 { 00:10:18.080 "name": "BaseBdev2", 00:10:18.080 "uuid": "f46168b6-530e-40ae-86ca-3963e2e58333", 00:10:18.080 "is_configured": true, 00:10:18.080 "data_offset": 0, 00:10:18.080 "data_size": 65536 00:10:18.080 } 00:10:18.080 ] 00:10:18.080 }' 00:10:18.080 12:02:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:18.080 12:02:25 -- common/autotest_common.sh@10 -- # set +x 00:10:18.646 12:02:25 -- bdev/bdev_raid.sh@262 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:18.646 [2024-07-25 12:02:25.921756] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:18.646 12:02:25 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:10:18.646 12:02:25 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:10:18.646 12:02:25 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:10:18.646 12:02:25 -- bdev/bdev_raid.sh@196 -- # return 0 00:10:18.646 12:02:25 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:10:18.646 12:02:25 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:10:18.646 12:02:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:18.646 12:02:25 -- bdev/bdev_raid.sh@118 -- # local 
expected_state=online 00:10:18.646 12:02:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:18.646 12:02:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:18.646 12:02:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:10:18.646 12:02:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:18.647 12:02:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:18.647 12:02:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:18.647 12:02:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:18.647 12:02:25 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:18.647 12:02:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:18.905 12:02:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:18.905 "name": "Existed_Raid", 00:10:18.905 "uuid": "72cc1dc6-c464-4afb-862d-7704df3de4fc", 00:10:18.905 "strip_size_kb": 0, 00:10:18.905 "state": "online", 00:10:18.905 "raid_level": "raid1", 00:10:18.905 "superblock": false, 00:10:18.905 "num_base_bdevs": 2, 00:10:18.905 "num_base_bdevs_discovered": 1, 00:10:18.905 "num_base_bdevs_operational": 1, 00:10:18.905 "base_bdevs_list": [ 00:10:18.905 { 00:10:18.905 "name": null, 00:10:18.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:18.905 "is_configured": false, 00:10:18.905 "data_offset": 0, 00:10:18.905 "data_size": 65536 00:10:18.905 }, 00:10:18.905 { 00:10:18.905 "name": "BaseBdev2", 00:10:18.905 "uuid": "f46168b6-530e-40ae-86ca-3963e2e58333", 00:10:18.905 "is_configured": true, 00:10:18.905 "data_offset": 0, 00:10:18.905 "data_size": 65536 00:10:18.905 } 00:10:18.905 ] 00:10:18.905 }' 00:10:18.905 12:02:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:18.905 12:02:26 -- common/autotest_common.sh@10 -- # set +x 00:10:19.472 12:02:26 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:10:19.472 12:02:26 -- bdev/bdev_raid.sh@273 -- # (( 
i < num_base_bdevs )) 00:10:19.472 12:02:26 -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:19.472 12:02:26 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:10:19.472 12:02:26 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:10:19.472 12:02:26 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:19.472 12:02:26 -- bdev/bdev_raid.sh@279 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:19.730 [2024-07-25 12:02:26.889016] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:19.730 [2024-07-25 12:02:26.889041] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:19.730 [2024-07-25 12:02:26.889069] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:19.730 [2024-07-25 12:02:26.900954] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:19.730 [2024-07-25 12:02:26.900976] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1526630 name Existed_Raid, state offline 00:10:19.730 12:02:26 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:10:19.730 12:02:26 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:19.730 12:02:26 -- bdev/bdev_raid.sh@281 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:19.730 12:02:26 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:10:19.989 12:02:27 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:10:19.989 12:02:27 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:10:19.989 12:02:27 -- bdev/bdev_raid.sh@287 -- # killprocess 1222765 00:10:19.989 12:02:27 -- common/autotest_common.sh@926 -- # '[' -z 1222765 ']' 00:10:19.989 12:02:27 -- common/autotest_common.sh@930 -- # kill -0 
1222765 00:10:19.989 12:02:27 -- common/autotest_common.sh@931 -- # uname 00:10:19.989 12:02:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:19.989 12:02:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1222765 00:10:19.989 12:02:27 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:19.989 12:02:27 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:19.989 12:02:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1222765' 00:10:19.989 killing process with pid 1222765 00:10:19.989 12:02:27 -- common/autotest_common.sh@945 -- # kill 1222765 00:10:19.989 [2024-07-25 12:02:27.128119] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:19.989 12:02:27 -- common/autotest_common.sh@950 -- # wait 1222765 00:10:19.989 [2024-07-25 12:02:27.129024] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:20.248 12:02:27 -- bdev/bdev_raid.sh@289 -- # return 0 00:10:20.248 00:10:20.248 real 0m6.967s 00:10:20.248 user 0m12.028s 00:10:20.248 sys 0m1.420s 00:10:20.248 12:02:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:20.248 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:10:20.248 ************************************ 00:10:20.248 END TEST raid_state_function_test 00:10:20.248 ************************************ 00:10:20.248 12:02:27 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:10:20.248 12:02:27 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:10:20.248 12:02:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:20.248 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:10:20.248 ************************************ 00:10:20.248 START TEST raid_state_function_test_sb 00:10:20.248 ************************************ 00:10:20.248 12:02:27 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 2 true 00:10:20.248 12:02:27 -- 
bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:10:20.248 12:02:27 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=2 00:10:20.248 12:02:27 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:10:20.248 12:02:27 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:10:20.248 12:02:27 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:10:20.248 12:02:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:20.248 12:02:27 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:10:20.248 12:02:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:20.248 12:02:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:20.248 12:02:27 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:10:20.248 12:02:27 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:20.248 12:02:27 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:20.248 12:02:27 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:10:20.248 12:02:27 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:10:20.248 12:02:27 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:10:20.248 12:02:27 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:10:20.248 12:02:27 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:10:20.248 12:02:27 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:10:20.248 12:02:27 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:10:20.248 12:02:27 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:10:20.248 12:02:27 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:10:20.248 12:02:27 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:10:20.248 12:02:27 -- bdev/bdev_raid.sh@226 -- # raid_pid=1223864 00:10:20.248 12:02:27 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 1223864' 00:10:20.248 Process raid pid: 1223864 00:10:20.248 12:02:27 -- bdev/bdev_raid.sh@225 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:20.248 12:02:27 -- bdev/bdev_raid.sh@228 
-- # waitforlisten 1223864 /var/tmp/spdk-raid.sock 00:10:20.248 12:02:27 -- common/autotest_common.sh@819 -- # '[' -z 1223864 ']' 00:10:20.248 12:02:27 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:20.248 12:02:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:20.248 12:02:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:20.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:20.248 12:02:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:20.248 12:02:27 -- common/autotest_common.sh@10 -- # set +x 00:10:20.248 [2024-07-25 12:02:27.466903] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:20.248 [2024-07-25 12:02:27.466959] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.248 [2024-07-25 12:02:27.554666] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.506 [2024-07-25 12:02:27.643804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.506 [2024-07-25 12:02:27.697628] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.506 [2024-07-25 12:02:27.697651] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:21.075 12:02:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:21.075 12:02:28 -- common/autotest_common.sh@852 -- # return 0 00:10:21.075 12:02:28 -- bdev/bdev_raid.sh@232 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:21.334 [2024-07-25 12:02:28.401307] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev1 00:10:21.334 [2024-07-25 12:02:28.401340] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:21.334 [2024-07-25 12:02:28.401347] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:21.334 [2024-07-25 12:02:28.401355] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:21.334 12:02:28 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:21.334 12:02:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:21.334 12:02:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:21.334 12:02:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:21.334 12:02:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:21.334 12:02:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:21.334 12:02:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:21.334 12:02:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:21.334 12:02:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:21.334 12:02:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:21.334 12:02:28 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:21.334 12:02:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.334 12:02:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:21.334 "name": "Existed_Raid", 00:10:21.334 "uuid": "0bda0eab-39d2-4a10-8526-bc962255c9d2", 00:10:21.334 "strip_size_kb": 0, 00:10:21.334 "state": "configuring", 00:10:21.334 "raid_level": "raid1", 00:10:21.334 "superblock": true, 00:10:21.334 "num_base_bdevs": 2, 00:10:21.334 "num_base_bdevs_discovered": 0, 00:10:21.334 "num_base_bdevs_operational": 2, 00:10:21.334 "base_bdevs_list": [ 00:10:21.334 { 00:10:21.334 "name": "BaseBdev1", 
00:10:21.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.334 "is_configured": false, 00:10:21.334 "data_offset": 0, 00:10:21.334 "data_size": 0 00:10:21.334 }, 00:10:21.334 { 00:10:21.334 "name": "BaseBdev2", 00:10:21.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.334 "is_configured": false, 00:10:21.334 "data_offset": 0, 00:10:21.334 "data_size": 0 00:10:21.334 } 00:10:21.334 ] 00:10:21.334 }' 00:10:21.334 12:02:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:21.334 12:02:28 -- common/autotest_common.sh@10 -- # set +x 00:10:21.902 12:02:29 -- bdev/bdev_raid.sh@234 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:22.161 [2024-07-25 12:02:29.219318] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:22.161 [2024-07-25 12:02:29.219340] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x205ed40 name Existed_Raid, state configuring 00:10:22.161 12:02:29 -- bdev/bdev_raid.sh@238 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:22.161 [2024-07-25 12:02:29.387778] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:22.161 [2024-07-25 12:02:29.387804] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:22.161 [2024-07-25 12:02:29.387810] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:22.161 [2024-07-25 12:02:29.387818] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:22.161 12:02:29 -- bdev/bdev_raid.sh@239 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:22.420 [2024-07-25 12:02:29.565116] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:22.420 BaseBdev1 00:10:22.420 12:02:29 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:10:22.420 12:02:29 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:10:22.420 12:02:29 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:22.420 12:02:29 -- common/autotest_common.sh@889 -- # local i 00:10:22.420 12:02:29 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:22.420 12:02:29 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:22.420 12:02:29 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:22.679 12:02:29 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:22.679 [ 00:10:22.679 { 00:10:22.679 "name": "BaseBdev1", 00:10:22.679 "aliases": [ 00:10:22.679 "d7ce3101-f6a2-4ba5-a95f-f09a56791b77" 00:10:22.679 ], 00:10:22.679 "product_name": "Malloc disk", 00:10:22.679 "block_size": 512, 00:10:22.679 "num_blocks": 65536, 00:10:22.679 "uuid": "d7ce3101-f6a2-4ba5-a95f-f09a56791b77", 00:10:22.679 "assigned_rate_limits": { 00:10:22.679 "rw_ios_per_sec": 0, 00:10:22.679 "rw_mbytes_per_sec": 0, 00:10:22.679 "r_mbytes_per_sec": 0, 00:10:22.679 "w_mbytes_per_sec": 0 00:10:22.679 }, 00:10:22.679 "claimed": true, 00:10:22.679 "claim_type": "exclusive_write", 00:10:22.679 "zoned": false, 00:10:22.679 "supported_io_types": { 00:10:22.679 "read": true, 00:10:22.679 "write": true, 00:10:22.679 "unmap": true, 00:10:22.679 "write_zeroes": true, 00:10:22.679 "flush": true, 00:10:22.679 "reset": true, 00:10:22.679 "compare": false, 00:10:22.679 "compare_and_write": false, 00:10:22.679 "abort": true, 00:10:22.679 "nvme_admin": false, 00:10:22.679 "nvme_io": false 00:10:22.679 }, 00:10:22.679 "memory_domains": [ 00:10:22.679 { 
00:10:22.679 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.679 "dma_device_type": 2 00:10:22.679 } 00:10:22.679 ], 00:10:22.679 "driver_specific": {} 00:10:22.679 } 00:10:22.679 ] 00:10:22.679 12:02:29 -- common/autotest_common.sh@895 -- # return 0 00:10:22.679 12:02:29 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:22.679 12:02:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:22.679 12:02:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:22.679 12:02:29 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:22.679 12:02:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:22.679 12:02:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:22.679 12:02:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:22.679 12:02:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:22.679 12:02:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:22.679 12:02:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:22.679 12:02:29 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:22.679 12:02:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.938 12:02:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:22.938 "name": "Existed_Raid", 00:10:22.938 "uuid": "ce050315-51ad-4a2e-9998-b8452e7986b4", 00:10:22.938 "strip_size_kb": 0, 00:10:22.938 "state": "configuring", 00:10:22.938 "raid_level": "raid1", 00:10:22.938 "superblock": true, 00:10:22.938 "num_base_bdevs": 2, 00:10:22.938 "num_base_bdevs_discovered": 1, 00:10:22.938 "num_base_bdevs_operational": 2, 00:10:22.938 "base_bdevs_list": [ 00:10:22.938 { 00:10:22.938 "name": "BaseBdev1", 00:10:22.938 "uuid": "d7ce3101-f6a2-4ba5-a95f-f09a56791b77", 00:10:22.938 "is_configured": true, 00:10:22.938 "data_offset": 2048, 00:10:22.938 "data_size": 
63488 00:10:22.938 }, 00:10:22.938 { 00:10:22.938 "name": "BaseBdev2", 00:10:22.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.938 "is_configured": false, 00:10:22.938 "data_offset": 0, 00:10:22.938 "data_size": 0 00:10:22.938 } 00:10:22.938 ] 00:10:22.938 }' 00:10:22.938 12:02:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:22.938 12:02:30 -- common/autotest_common.sh@10 -- # set +x 00:10:23.504 12:02:30 -- bdev/bdev_raid.sh@242 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:23.504 [2024-07-25 12:02:30.712073] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:23.504 [2024-07-25 12:02:30.712110] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x205efc0 name Existed_Raid, state configuring 00:10:23.504 12:02:30 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:10:23.504 12:02:30 -- bdev/bdev_raid.sh@246 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:23.762 12:02:30 -- bdev/bdev_raid.sh@247 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:23.762 BaseBdev1 00:10:24.029 12:02:31 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:10:24.029 12:02:31 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:10:24.029 12:02:31 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:24.029 12:02:31 -- common/autotest_common.sh@889 -- # local i 00:10:24.029 12:02:31 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:24.029 12:02:31 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:24.029 12:02:31 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:24.029 12:02:31 -- common/autotest_common.sh@894 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:24.326 [ 00:10:24.326 { 00:10:24.326 "name": "BaseBdev1", 00:10:24.326 "aliases": [ 00:10:24.326 "200344a8-0cd9-4b9f-a0e6-300bc55988dc" 00:10:24.326 ], 00:10:24.326 "product_name": "Malloc disk", 00:10:24.326 "block_size": 512, 00:10:24.326 "num_blocks": 65536, 00:10:24.326 "uuid": "200344a8-0cd9-4b9f-a0e6-300bc55988dc", 00:10:24.326 "assigned_rate_limits": { 00:10:24.326 "rw_ios_per_sec": 0, 00:10:24.326 "rw_mbytes_per_sec": 0, 00:10:24.326 "r_mbytes_per_sec": 0, 00:10:24.326 "w_mbytes_per_sec": 0 00:10:24.326 }, 00:10:24.326 "claimed": false, 00:10:24.326 "zoned": false, 00:10:24.326 "supported_io_types": { 00:10:24.326 "read": true, 00:10:24.326 "write": true, 00:10:24.326 "unmap": true, 00:10:24.326 "write_zeroes": true, 00:10:24.326 "flush": true, 00:10:24.326 "reset": true, 00:10:24.326 "compare": false, 00:10:24.326 "compare_and_write": false, 00:10:24.326 "abort": true, 00:10:24.326 "nvme_admin": false, 00:10:24.326 "nvme_io": false 00:10:24.326 }, 00:10:24.326 "memory_domains": [ 00:10:24.326 { 00:10:24.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.326 "dma_device_type": 2 00:10:24.326 } 00:10:24.326 ], 00:10:24.326 "driver_specific": {} 00:10:24.326 } 00:10:24.326 ] 00:10:24.326 12:02:31 -- common/autotest_common.sh@895 -- # return 0 00:10:24.326 12:02:31 -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:10:24.326 [2024-07-25 12:02:31.562883] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:24.326 [2024-07-25 12:02:31.563838] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:24.326 [2024-07-25 12:02:31.563865] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist 
now 00:10:24.326 12:02:31 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:10:24.326 12:02:31 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:24.326 12:02:31 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:10:24.326 12:02:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:24.326 12:02:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:24.326 12:02:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:24.326 12:02:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:24.326 12:02:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:24.326 12:02:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:24.326 12:02:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:24.326 12:02:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:24.326 12:02:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:24.326 12:02:31 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:24.326 12:02:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.584 12:02:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:24.584 "name": "Existed_Raid", 00:10:24.584 "uuid": "7ee4b82a-cddd-4b4d-ac9a-1f4d6b0f27dc", 00:10:24.584 "strip_size_kb": 0, 00:10:24.584 "state": "configuring", 00:10:24.584 "raid_level": "raid1", 00:10:24.584 "superblock": true, 00:10:24.584 "num_base_bdevs": 2, 00:10:24.584 "num_base_bdevs_discovered": 1, 00:10:24.584 "num_base_bdevs_operational": 2, 00:10:24.584 "base_bdevs_list": [ 00:10:24.584 { 00:10:24.584 "name": "BaseBdev1", 00:10:24.584 "uuid": "200344a8-0cd9-4b9f-a0e6-300bc55988dc", 00:10:24.584 "is_configured": true, 00:10:24.584 "data_offset": 2048, 00:10:24.584 "data_size": 63488 00:10:24.584 }, 00:10:24.584 { 00:10:24.584 "name": "BaseBdev2", 00:10:24.584 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:24.584 "is_configured": false, 00:10:24.584 "data_offset": 0, 00:10:24.584 "data_size": 0 00:10:24.584 } 00:10:24.584 ] 00:10:24.584 }' 00:10:24.584 12:02:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:24.584 12:02:31 -- common/autotest_common.sh@10 -- # set +x 00:10:25.151 12:02:32 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:25.151 [2024-07-25 12:02:32.380997] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:25.151 [2024-07-25 12:02:32.381135] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x2204790 00:10:25.151 [2024-07-25 12:02:32.381146] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:25.151 [2024-07-25 12:02:32.381289] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x205e300 00:10:25.151 [2024-07-25 12:02:32.381377] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2204790 00:10:25.151 [2024-07-25 12:02:32.381383] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2204790 00:10:25.151 [2024-07-25 12:02:32.381449] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:25.151 BaseBdev2 00:10:25.151 12:02:32 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:10:25.151 12:02:32 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:10:25.151 12:02:32 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:25.151 12:02:32 -- common/autotest_common.sh@889 -- # local i 00:10:25.151 12:02:32 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:25.151 12:02:32 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:25.152 12:02:32 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:25.410 12:02:32 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:25.410 [ 00:10:25.410 { 00:10:25.410 "name": "BaseBdev2", 00:10:25.410 "aliases": [ 00:10:25.410 "c22cd4e0-d131-4ea1-9c18-03f686fc67a9" 00:10:25.410 ], 00:10:25.410 "product_name": "Malloc disk", 00:10:25.410 "block_size": 512, 00:10:25.410 "num_blocks": 65536, 00:10:25.410 "uuid": "c22cd4e0-d131-4ea1-9c18-03f686fc67a9", 00:10:25.410 "assigned_rate_limits": { 00:10:25.410 "rw_ios_per_sec": 0, 00:10:25.410 "rw_mbytes_per_sec": 0, 00:10:25.410 "r_mbytes_per_sec": 0, 00:10:25.410 "w_mbytes_per_sec": 0 00:10:25.410 }, 00:10:25.410 "claimed": true, 00:10:25.410 "claim_type": "exclusive_write", 00:10:25.410 "zoned": false, 00:10:25.410 "supported_io_types": { 00:10:25.410 "read": true, 00:10:25.410 "write": true, 00:10:25.410 "unmap": true, 00:10:25.410 "write_zeroes": true, 00:10:25.410 "flush": true, 00:10:25.410 "reset": true, 00:10:25.410 "compare": false, 00:10:25.410 "compare_and_write": false, 00:10:25.410 "abort": true, 00:10:25.410 "nvme_admin": false, 00:10:25.410 "nvme_io": false 00:10:25.410 }, 00:10:25.410 "memory_domains": [ 00:10:25.410 { 00:10:25.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.410 "dma_device_type": 2 00:10:25.410 } 00:10:25.410 ], 00:10:25.410 "driver_specific": {} 00:10:25.410 } 00:10:25.410 ] 00:10:25.410 12:02:32 -- common/autotest_common.sh@895 -- # return 0 00:10:25.410 12:02:32 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:10:25.410 12:02:32 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:25.410 12:02:32 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:25.410 12:02:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:25.410 12:02:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:25.410 12:02:32 -- 
bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:25.410 12:02:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:25.410 12:02:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:25.411 12:02:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:25.411 12:02:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:25.411 12:02:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:25.411 12:02:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:25.411 12:02:32 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:25.411 12:02:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.669 12:02:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:25.669 "name": "Existed_Raid", 00:10:25.669 "uuid": "7ee4b82a-cddd-4b4d-ac9a-1f4d6b0f27dc", 00:10:25.669 "strip_size_kb": 0, 00:10:25.669 "state": "online", 00:10:25.669 "raid_level": "raid1", 00:10:25.669 "superblock": true, 00:10:25.669 "num_base_bdevs": 2, 00:10:25.669 "num_base_bdevs_discovered": 2, 00:10:25.669 "num_base_bdevs_operational": 2, 00:10:25.669 "base_bdevs_list": [ 00:10:25.669 { 00:10:25.669 "name": "BaseBdev1", 00:10:25.669 "uuid": "200344a8-0cd9-4b9f-a0e6-300bc55988dc", 00:10:25.669 "is_configured": true, 00:10:25.669 "data_offset": 2048, 00:10:25.669 "data_size": 63488 00:10:25.669 }, 00:10:25.669 { 00:10:25.669 "name": "BaseBdev2", 00:10:25.669 "uuid": "c22cd4e0-d131-4ea1-9c18-03f686fc67a9", 00:10:25.669 "is_configured": true, 00:10:25.669 "data_offset": 2048, 00:10:25.669 "data_size": 63488 00:10:25.669 } 00:10:25.669 ] 00:10:25.669 }' 00:10:25.669 12:02:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:25.669 12:02:32 -- common/autotest_common.sh@10 -- # set +x 00:10:26.240 12:02:33 -- bdev/bdev_raid.sh@262 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev1 00:10:26.240 [2024-07-25 12:02:33.523979] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:26.240 12:02:33 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:10:26.240 12:02:33 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:10:26.240 12:02:33 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:10:26.240 12:02:33 -- bdev/bdev_raid.sh@196 -- # return 0 00:10:26.240 12:02:33 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:10:26.240 12:02:33 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:10:26.240 12:02:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:26.240 12:02:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:26.240 12:02:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:26.240 12:02:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:26.240 12:02:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:10:26.240 12:02:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:26.498 12:02:33 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:26.498 12:02:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:26.498 12:02:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:26.498 12:02:33 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:26.498 12:02:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.498 12:02:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:26.498 "name": "Existed_Raid", 00:10:26.498 "uuid": "7ee4b82a-cddd-4b4d-ac9a-1f4d6b0f27dc", 00:10:26.498 "strip_size_kb": 0, 00:10:26.498 "state": "online", 00:10:26.498 "raid_level": "raid1", 00:10:26.498 "superblock": true, 00:10:26.498 "num_base_bdevs": 2, 00:10:26.498 "num_base_bdevs_discovered": 1, 00:10:26.498 "num_base_bdevs_operational": 1, 00:10:26.498 "base_bdevs_list": 
[ 00:10:26.498 { 00:10:26.498 "name": null, 00:10:26.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.498 "is_configured": false, 00:10:26.498 "data_offset": 2048, 00:10:26.498 "data_size": 63488 00:10:26.498 }, 00:10:26.498 { 00:10:26.498 "name": "BaseBdev2", 00:10:26.498 "uuid": "c22cd4e0-d131-4ea1-9c18-03f686fc67a9", 00:10:26.498 "is_configured": true, 00:10:26.498 "data_offset": 2048, 00:10:26.498 "data_size": 63488 00:10:26.498 } 00:10:26.498 ] 00:10:26.498 }' 00:10:26.498 12:02:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:26.498 12:02:33 -- common/autotest_common.sh@10 -- # set +x 00:10:27.064 12:02:34 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:10:27.064 12:02:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:27.064 12:02:34 -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:27.064 12:02:34 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:10:27.064 12:02:34 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:10:27.064 12:02:34 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:27.064 12:02:34 -- bdev/bdev_raid.sh@279 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:27.323 [2024-07-25 12:02:34.515336] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:27.323 [2024-07-25 12:02:34.515362] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:27.323 [2024-07-25 12:02:34.515403] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:27.323 [2024-07-25 12:02:34.527445] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:27.323 [2024-07-25 12:02:34.527468] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2204790 name Existed_Raid, state offline 
00:10:27.323 12:02:34 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:10:27.323 12:02:34 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:27.323 12:02:34 -- bdev/bdev_raid.sh@281 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:27.323 12:02:34 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:10:27.582 12:02:34 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:10:27.582 12:02:34 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:10:27.582 12:02:34 -- bdev/bdev_raid.sh@287 -- # killprocess 1223864 00:10:27.582 12:02:34 -- common/autotest_common.sh@926 -- # '[' -z 1223864 ']' 00:10:27.582 12:02:34 -- common/autotest_common.sh@930 -- # kill -0 1223864 00:10:27.582 12:02:34 -- common/autotest_common.sh@931 -- # uname 00:10:27.582 12:02:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:27.582 12:02:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1223864 00:10:27.582 12:02:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:27.582 12:02:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:27.582 12:02:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1223864' 00:10:27.582 killing process with pid 1223864 00:10:27.582 12:02:34 -- common/autotest_common.sh@945 -- # kill 1223864 00:10:27.582 [2024-07-25 12:02:34.759636] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:27.582 12:02:34 -- common/autotest_common.sh@950 -- # wait 1223864 00:10:27.582 [2024-07-25 12:02:34.760445] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:27.841 12:02:34 -- bdev/bdev_raid.sh@289 -- # return 0 00:10:27.841 00:10:27.841 real 0m7.562s 00:10:27.841 user 0m13.111s 00:10:27.841 sys 0m1.508s 00:10:27.841 12:02:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:27.841 12:02:34 -- common/autotest_common.sh@10 -- # set +x 00:10:27.841 
************************************ 00:10:27.841 END TEST raid_state_function_test_sb 00:10:27.841 ************************************ 00:10:27.841 12:02:35 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:10:27.841 12:02:35 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:10:27.841 12:02:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:27.841 12:02:35 -- common/autotest_common.sh@10 -- # set +x 00:10:27.841 ************************************ 00:10:27.841 START TEST raid_superblock_test 00:10:27.841 ************************************ 00:10:27.841 12:02:35 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 2 00:10:27.841 12:02:35 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:10:27.841 12:02:35 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=2 00:10:27.841 12:02:35 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:10:27.841 12:02:35 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:10:27.841 12:02:35 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:10:27.841 12:02:35 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:10:27.841 12:02:35 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:10:27.841 12:02:35 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:10:27.841 12:02:35 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:10:27.841 12:02:35 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:10:27.841 12:02:35 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:10:27.841 12:02:35 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:10:27.841 12:02:35 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:10:27.841 12:02:35 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:10:27.841 12:02:35 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:10:27.841 12:02:35 -- bdev/bdev_raid.sh@357 -- # raid_pid=1225125 00:10:27.841 12:02:35 -- bdev/bdev_raid.sh@358 -- # waitforlisten 1225125 /var/tmp/spdk-raid.sock 00:10:27.841 12:02:35 
-- bdev/bdev_raid.sh@356 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:10:27.841 12:02:35 -- common/autotest_common.sh@819 -- # '[' -z 1225125 ']' 00:10:27.841 12:02:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:27.841 12:02:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:27.841 12:02:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:27.841 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:27.841 12:02:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:27.841 12:02:35 -- common/autotest_common.sh@10 -- # set +x 00:10:27.841 [2024-07-25 12:02:35.074661] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:27.841 [2024-07-25 12:02:35.074719] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1225125 ] 00:10:28.099 [2024-07-25 12:02:35.162291] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.099 [2024-07-25 12:02:35.244952] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.099 [2024-07-25 12:02:35.300018] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:28.099 [2024-07-25 12:02:35.300052] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:28.666 12:02:35 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:28.666 12:02:35 -- common/autotest_common.sh@852 -- # return 0 00:10:28.666 12:02:35 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:10:28.666 12:02:35 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:10:28.666 12:02:35 -- bdev/bdev_raid.sh@362 -- # local 
bdev_malloc=malloc1 00:10:28.666 12:02:35 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:10:28.666 12:02:35 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:28.666 12:02:35 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:28.666 12:02:35 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:10:28.666 12:02:35 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:28.666 12:02:35 -- bdev/bdev_raid.sh@370 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:10:28.925 malloc1 00:10:28.925 12:02:36 -- bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:28.925 [2024-07-25 12:02:36.213051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:28.925 [2024-07-25 12:02:36.213094] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:28.925 [2024-07-25 12:02:36.213124] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9408d0 00:10:28.925 [2024-07-25 12:02:36.213133] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:28.925 [2024-07-25 12:02:36.214253] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:28.925 [2024-07-25 12:02:36.214280] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:28.925 pt1 00:10:28.925 12:02:36 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:10:28.925 12:02:36 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:10:28.925 12:02:36 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:10:28.925 12:02:36 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:10:28.925 12:02:36 -- bdev/bdev_raid.sh@364 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:29.184 12:02:36 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:29.184 12:02:36 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:10:29.184 12:02:36 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:29.184 12:02:36 -- bdev/bdev_raid.sh@370 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:10:29.184 malloc2 00:10:29.184 12:02:36 -- bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:29.442 [2024-07-25 12:02:36.537756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:29.442 [2024-07-25 12:02:36.537794] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.442 [2024-07-25 12:02:36.537823] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xae81a0 00:10:29.442 [2024-07-25 12:02:36.537832] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.442 [2024-07-25 12:02:36.538787] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.442 [2024-07-25 12:02:36.538808] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:29.442 pt2 00:10:29.442 12:02:36 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:10:29.442 12:02:36 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:10:29.442 12:02:36 -- bdev/bdev_raid.sh@375 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:10:29.442 [2024-07-25 12:02:36.698180] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:29.442 [2024-07-25 12:02:36.698943] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:29.442 [2024-07-25 12:02:36.699047] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0xae8700 00:10:29.442 [2024-07-25 12:02:36.699056] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:29.442 [2024-07-25 12:02:36.699172] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x93f5a0 00:10:29.442 [2024-07-25 12:02:36.699263] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xae8700 00:10:29.442 [2024-07-25 12:02:36.699269] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xae8700 00:10:29.442 [2024-07-25 12:02:36.699352] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.442 12:02:36 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:29.442 12:02:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:29.442 12:02:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:29.442 12:02:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:10:29.442 12:02:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:10:29.442 12:02:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:29.442 12:02:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:29.442 12:02:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:29.442 12:02:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:29.442 12:02:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:29.442 12:02:36 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:29.442 12:02:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.701 12:02:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:29.701 "name": "raid_bdev1", 
00:10:29.701 "uuid": "e2718b13-f859-4d9a-8ff6-40e93e256dbc", 00:10:29.701 "strip_size_kb": 0, 00:10:29.701 "state": "online", 00:10:29.701 "raid_level": "raid1", 00:10:29.701 "superblock": true, 00:10:29.701 "num_base_bdevs": 2, 00:10:29.701 "num_base_bdevs_discovered": 2, 00:10:29.701 "num_base_bdevs_operational": 2, 00:10:29.701 "base_bdevs_list": [ 00:10:29.701 { 00:10:29.701 "name": "pt1", 00:10:29.701 "uuid": "cc4faf47-8a39-588e-bad8-84457a323929", 00:10:29.701 "is_configured": true, 00:10:29.701 "data_offset": 2048, 00:10:29.701 "data_size": 63488 00:10:29.701 }, 00:10:29.701 { 00:10:29.701 "name": "pt2", 00:10:29.701 "uuid": "88a9f326-81ca-56d4-9b88-9ac5d97ee978", 00:10:29.701 "is_configured": true, 00:10:29.701 "data_offset": 2048, 00:10:29.701 "data_size": 63488 00:10:29.701 } 00:10:29.701 ] 00:10:29.701 }' 00:10:29.701 12:02:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:29.701 12:02:36 -- common/autotest_common.sh@10 -- # set +x 00:10:30.268 12:02:37 -- bdev/bdev_raid.sh@379 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:30.268 12:02:37 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:10:30.268 [2024-07-25 12:02:37.528439] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:30.268 12:02:37 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=e2718b13-f859-4d9a-8ff6-40e93e256dbc 00:10:30.268 12:02:37 -- bdev/bdev_raid.sh@380 -- # '[' -z e2718b13-f859-4d9a-8ff6-40e93e256dbc ']' 00:10:30.268 12:02:37 -- bdev/bdev_raid.sh@385 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:30.527 [2024-07-25 12:02:37.704741] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:30.527 [2024-07-25 12:02:37.704758] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:30.527 [2024-07-25 12:02:37.704797] 
bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:30.527 [2024-07-25 12:02:37.704836] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:30.527 [2024-07-25 12:02:37.704844] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xae8700 name raid_bdev1, state offline
00:10:30.527 12:02:37 -- bdev/bdev_raid.sh@386 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:30.527 12:02:37 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]'
00:10:30.786 12:02:37 -- bdev/bdev_raid.sh@386 -- # raid_bdev=
00:10:30.786 12:02:37 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']'
00:10:30.786 12:02:37 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:10:30.786 12:02:37 -- bdev/bdev_raid.sh@393 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:10:30.786 12:02:38 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:10:30.786 12:02:38 -- bdev/bdev_raid.sh@393 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:10:31.045 12:02:38 -- bdev/bdev_raid.sh@395 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:10:31.045 12:02:38 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:10:31.304 12:02:38 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
00:10:31.304 12:02:38 -- bdev/bdev_raid.sh@401 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1
00:10:31.304 12:02:38 -- common/autotest_common.sh@640 -- # local es=0
00:10:31.304 12:02:38 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1
00:10:31.304 12:02:38 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:10:31.304 12:02:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:10:31.304 12:02:38 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:10:31.304 12:02:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:10:31.304 12:02:38 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:10:31.304 12:02:38 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:10:31.304 12:02:38 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:10:31.304 12:02:38 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]]
00:10:31.304 12:02:38 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1
00:10:31.304 [2024-07-25 12:02:38.554913] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:10:31.304 [2024-07-25 12:02:38.555934] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:10:31.304 [2024-07-25 12:02:38.555980] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:10:31.304 [2024-07-25 12:02:38.556011] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:10:31.304 [2024-07-25 12:02:38.556023] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:31.304 [2024-07-25 12:02:38.556029] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9411c0 name raid_bdev1, state configuring
00:10:31.304 request:
00:10:31.304 {
00:10:31.304 "name": "raid_bdev1",
00:10:31.304 "raid_level": "raid1",
00:10:31.304 "base_bdevs": [
00:10:31.304 "malloc1",
00:10:31.304 "malloc2"
00:10:31.304 ],
00:10:31.304 "superblock": false,
00:10:31.304 "method": "bdev_raid_create",
00:10:31.304 "req_id": 1
00:10:31.304 }
00:10:31.304 Got JSON-RPC error response
00:10:31.304 response:
00:10:31.304 {
00:10:31.304 "code": -17,
00:10:31.304 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:10:31.304 }
00:10:31.304 12:02:38 -- common/autotest_common.sh@643 -- # es=1
00:10:31.304 12:02:38 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:10:31.304 12:02:38 -- common/autotest_common.sh@662 -- # [[ -n '' ]]
00:10:31.304 12:02:38 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:10:31.304 12:02:38 -- bdev/bdev_raid.sh@403 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:31.304 12:02:38 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:10:31.563 12:02:38 -- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:10:31.563 12:02:38 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
00:10:31.563 12:02:38 -- bdev/bdev_raid.sh@409 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:10:31.822 [2024-07-25 12:02:38.895751] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:10:31.822 [2024-07-25 12:02:38.895790] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:31.822 [2024-07-25 12:02:38.895822] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x940b00
00:10:31.822 [2024-07-25 12:02:38.895831] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:31.822 [2024-07-25 12:02:38.897016] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:31.822 [2024-07-25 12:02:38.897039] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:10:31.822 [2024-07-25 12:02:38.897091] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:10:31.822 [2024-07-25 12:02:38.897109] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:10:31.822 pt1
00:10:31.822 12:02:38 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2
00:10:31.822 12:02:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:10:31.822 12:02:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:10:31.822 12:02:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:10:31.822 12:02:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:10:31.822 12:02:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:10:31.822 12:02:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:10:31.822 12:02:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:10:31.822 12:02:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:10:31.822 12:02:38 -- bdev/bdev_raid.sh@125 -- # local tmp
00:10:31.822 12:02:38 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:31.822 12:02:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:31.822 12:02:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:10:31.822 "name": "raid_bdev1",
00:10:31.822 "uuid": "e2718b13-f859-4d9a-8ff6-40e93e256dbc",
00:10:31.822 "strip_size_kb": 0,
00:10:31.822 "state": "configuring",
00:10:31.822 "raid_level": "raid1",
00:10:31.822 "superblock": true,
00:10:31.822 "num_base_bdevs": 2,
00:10:31.822 "num_base_bdevs_discovered": 1,
00:10:31.822 "num_base_bdevs_operational": 2,
00:10:31.822 "base_bdevs_list": [
00:10:31.822 {
00:10:31.822 "name": "pt1",
00:10:31.822 "uuid": "cc4faf47-8a39-588e-bad8-84457a323929",
00:10:31.822 "is_configured": true,
00:10:31.822 "data_offset": 2048,
00:10:31.822 "data_size": 63488
00:10:31.822 },
00:10:31.822 {
00:10:31.822 "name": null,
00:10:31.822 "uuid": "88a9f326-81ca-56d4-9b88-9ac5d97ee978",
00:10:31.822 "is_configured": false,
00:10:31.822 "data_offset": 2048,
00:10:31.822 "data_size": 63488
00:10:31.822 }
00:10:31.822 ]
00:10:31.822 }'
00:10:31.822 12:02:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:10:31.822 12:02:39 -- common/autotest_common.sh@10 -- # set +x
00:10:32.390 12:02:39 -- bdev/bdev_raid.sh@414 -- # '[' 2 -gt 2 ']'
00:10:32.390 12:02:39 -- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:10:32.390 12:02:39 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:10:32.390 12:02:39 -- bdev/bdev_raid.sh@423 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:10:32.390 [2024-07-25 12:02:39.689796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:10:32.390 [2024-07-25 12:02:39.689839] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:32.390 [2024-07-25 12:02:39.689855] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xae5e60
00:10:32.390 [2024-07-25 12:02:39.689862] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:32.390 [2024-07-25 12:02:39.690110] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:32.390 [2024-07-25 12:02:39.690121] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:10:32.390 [2024-07-25 12:02:39.690166] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:10:32.390 [2024-07-25 12:02:39.690178] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:32.390 [2024-07-25 12:02:39.690243] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x93fe60
00:10:32.390 [2024-07-25 12:02:39.690249] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:10:32.390 [2024-07-25 12:02:39.690387] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x941e00
00:10:32.390 [2024-07-25 12:02:39.690475] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x93fe60
00:10:32.390 [2024-07-25 12:02:39.690482] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x93fe60
00:10:32.390 [2024-07-25 12:02:39.690549] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:32.390 pt2
00:10:32.649 12:02:39 -- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:10:32.649 12:02:39 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:10:32.649 12:02:39 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:10:32.649 12:02:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:10:32.649 12:02:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:10:32.649 12:02:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:10:32.649 12:02:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:10:32.649 12:02:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:10:32.649 12:02:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:10:32.649 12:02:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:10:32.649 12:02:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:10:32.649 12:02:39 -- bdev/bdev_raid.sh@125 -- # local tmp
00:10:32.649 12:02:39 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:32.649 12:02:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:32.649 12:02:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:10:32.649 "name": "raid_bdev1",
00:10:32.649 "uuid": "e2718b13-f859-4d9a-8ff6-40e93e256dbc",
00:10:32.649 "strip_size_kb": 0,
00:10:32.649 "state": "online",
00:10:32.649 "raid_level": "raid1",
00:10:32.649 "superblock": true,
00:10:32.649 "num_base_bdevs": 2,
00:10:32.649 "num_base_bdevs_discovered": 2,
00:10:32.649 "num_base_bdevs_operational": 2,
00:10:32.649 "base_bdevs_list": [
00:10:32.649 {
00:10:32.649 "name": "pt1",
00:10:32.649 "uuid": "cc4faf47-8a39-588e-bad8-84457a323929",
00:10:32.649 "is_configured": true,
00:10:32.649 "data_offset": 2048,
00:10:32.649 "data_size": 63488
00:10:32.649 },
00:10:32.649 {
00:10:32.649 "name": "pt2",
00:10:32.649 "uuid": "88a9f326-81ca-56d4-9b88-9ac5d97ee978",
00:10:32.649 "is_configured": true,
00:10:32.649 "data_offset": 2048,
00:10:32.650 "data_size": 63488
00:10:32.650 }
00:10:32.650 ]
00:10:32.650 }'
00:10:32.650 12:02:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:10:32.650 12:02:39 -- common/autotest_common.sh@10 -- # set +x
00:10:33.216 12:02:40 -- bdev/bdev_raid.sh@430 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:10:33.216 12:02:40 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid'
00:10:33.216 [2024-07-25 12:02:40.524101] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:33.475 12:02:40 -- bdev/bdev_raid.sh@430 -- # '[' e2718b13-f859-4d9a-8ff6-40e93e256dbc '!=' e2718b13-f859-4d9a-8ff6-40e93e256dbc ']'
00:10:33.475 12:02:40 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1
00:10:33.475 12:02:40 -- bdev/bdev_raid.sh@195 -- # case $1 in
00:10:33.475 12:02:40 -- bdev/bdev_raid.sh@196 -- # return 0
00:10:33.475 12:02:40 -- bdev/bdev_raid.sh@436 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:10:33.475 [2024-07-25 12:02:40.696413] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1
00:10:33.475 12:02:40 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:10:33.475 12:02:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:10:33.475 12:02:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:10:33.475 12:02:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:10:33.475 12:02:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:10:33.475 12:02:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:10:33.475 12:02:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:10:33.475 12:02:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:10:33.475 12:02:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:10:33.475 12:02:40 -- bdev/bdev_raid.sh@125 -- # local tmp
00:10:33.475 12:02:40 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:33.475 12:02:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:33.734 12:02:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:10:33.734 "name": "raid_bdev1",
00:10:33.734 "uuid": "e2718b13-f859-4d9a-8ff6-40e93e256dbc",
00:10:33.734 "strip_size_kb": 0,
00:10:33.734 "state": "online",
00:10:33.734 "raid_level": "raid1",
00:10:33.734 "superblock": true,
00:10:33.734 "num_base_bdevs": 2,
00:10:33.734 "num_base_bdevs_discovered": 1,
00:10:33.734 "num_base_bdevs_operational": 1,
00:10:33.734 "base_bdevs_list": [
00:10:33.734 {
00:10:33.734 "name": null,
00:10:33.734 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:33.734 "is_configured": false,
00:10:33.734 "data_offset": 2048,
00:10:33.734 "data_size": 63488
00:10:33.734 },
00:10:33.734 {
00:10:33.734 "name": "pt2",
00:10:33.734 "uuid": "88a9f326-81ca-56d4-9b88-9ac5d97ee978",
00:10:33.734 "is_configured": true,
00:10:33.734 "data_offset": 2048,
00:10:33.734 "data_size": 63488
00:10:33.734 }
00:10:33.734 ]
00:10:33.734 }'
00:10:33.734 12:02:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:10:33.734 12:02:40 -- common/autotest_common.sh@10 -- # set +x
00:10:34.305 12:02:41 -- bdev/bdev_raid.sh@442 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:10:34.305 [2024-07-25 12:02:41.526718] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:10:34.305 [2024-07-25 12:02:41.526742] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:10:34.305 [2024-07-25 12:02:41.526784] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:34.305 [2024-07-25 12:02:41.526814] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:34.305 [2024-07-25 12:02:41.526821] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x93fe60 name raid_bdev1, state offline
00:10:34.305 12:02:41 -- bdev/bdev_raid.sh@443 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:34.305 12:02:41 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]'
00:10:34.568 12:02:41 -- bdev/bdev_raid.sh@443 -- # raid_bdev=
00:10:34.568 12:02:41 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']'
00:10:34.568 12:02:41 -- bdev/bdev_raid.sh@449 -- # (( i = 1 ))
00:10:34.568 12:02:41 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:10:34.568 12:02:41 -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:10:34.827 12:02:41 -- bdev/bdev_raid.sh@449 -- # (( i++ ))
00:10:34.827 12:02:41 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs ))
00:10:34.827 12:02:41 -- bdev/bdev_raid.sh@454 -- # (( i = 1 ))
00:10:34.827 12:02:41 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 ))
00:10:34.827 12:02:41 -- bdev/bdev_raid.sh@462 -- # i=1
00:10:34.827 12:02:41 -- bdev/bdev_raid.sh@463 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:10:34.827 [2024-07-25 12:02:42.044038] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:10:34.827 [2024-07-25 12:02:42.044073] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:10:34.827 [2024-07-25 12:02:42.044086] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xae7780
00:10:34.827 [2024-07-25 12:02:42.044094] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:10:34.827 [2024-07-25 12:02:42.045266] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:10:34.827 [2024-07-25 12:02:42.045296] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:10:34.827 [2024-07-25 12:02:42.045344] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:10:34.827 [2024-07-25 12:02:42.045362] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:10:34.827 [2024-07-25 12:02:42.045423] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0xaecd70
00:10:34.827 [2024-07-25 12:02:42.045430] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:10:34.827 [2024-07-25 12:02:42.045550] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9577c0
00:10:34.827 [2024-07-25 12:02:42.045632] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xaecd70
00:10:34.827 [2024-07-25 12:02:42.045638] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xaecd70
00:10:34.827 [2024-07-25 12:02:42.045710] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:10:34.827 pt2
00:10:34.827 12:02:42 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:10:34.827 12:02:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:10:34.827 12:02:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:10:34.827 12:02:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:10:34.827 12:02:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:10:34.827 12:02:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:10:34.827 12:02:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:10:34.827 12:02:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:10:34.827 12:02:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:10:34.827 12:02:42 -- bdev/bdev_raid.sh@125 -- # local tmp
00:10:34.827 12:02:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:10:34.827 12:02:42 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:35.085 12:02:42 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:10:35.085 "name": "raid_bdev1",
00:10:35.085 "uuid": "e2718b13-f859-4d9a-8ff6-40e93e256dbc",
00:10:35.085 "strip_size_kb": 0,
00:10:35.085 "state": "online",
00:10:35.085 "raid_level": "raid1",
00:10:35.085 "superblock": true,
00:10:35.085 "num_base_bdevs": 2,
00:10:35.085 "num_base_bdevs_discovered": 1,
00:10:35.085 "num_base_bdevs_operational": 1,
00:10:35.085 "base_bdevs_list": [
00:10:35.085 {
00:10:35.086 "name": null,
00:10:35.086 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:35.086 "is_configured": false,
00:10:35.086 "data_offset": 2048,
00:10:35.086 "data_size": 63488
00:10:35.086 },
00:10:35.086 {
00:10:35.086 "name": "pt2",
00:10:35.086 "uuid": "88a9f326-81ca-56d4-9b88-9ac5d97ee978",
00:10:35.086 "is_configured": true,
00:10:35.086 "data_offset": 2048,
00:10:35.086 "data_size": 63488
00:10:35.086 }
00:10:35.086 ]
00:10:35.086 }'
00:10:35.086 12:02:42 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:10:35.086 12:02:42 -- common/autotest_common.sh@10 -- # set +x
00:10:35.650 12:02:42 -- bdev/bdev_raid.sh@468 -- # '[' 2 -gt 2 ']'
00:10:35.650 12:02:42 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid'
00:10:35.650 12:02:42 -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:10:35.650 [2024-07-25 12:02:42.886334] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:10:35.650 12:02:42 -- bdev/bdev_raid.sh@506 -- # '[' e2718b13-f859-4d9a-8ff6-40e93e256dbc '!=' e2718b13-f859-4d9a-8ff6-40e93e256dbc ']'
00:10:35.650 12:02:42 -- bdev/bdev_raid.sh@511 -- # killprocess 1225125
00:10:35.650 12:02:42 -- common/autotest_common.sh@926 -- # '[' -z 1225125 ']'
00:10:35.650 12:02:42 -- common/autotest_common.sh@930 -- # kill -0 1225125
00:10:35.650 12:02:42 -- common/autotest_common.sh@931 -- # uname
00:10:35.650 12:02:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:10:35.650 12:02:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1225125
00:10:35.650 12:02:42 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:10:35.650 12:02:42 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:10:35.650 12:02:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1225125'
00:10:35.650 killing process with pid 1225125
00:10:35.650 12:02:42 -- common/autotest_common.sh@945 -- # kill 1225125
00:10:35.650 [2024-07-25 12:02:42.956182] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:10:35.650 [2024-07-25 12:02:42.956230] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:10:35.650 [2024-07-25 12:02:42.956262] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:10:35.650 [2024-07-25 12:02:42.956270] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xaecd70 name raid_bdev1, state offline
00:10:35.650 12:02:42 -- common/autotest_common.sh@950 -- # wait 1225125
00:10:35.909 [2024-07-25 12:02:42.971805] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:10:35.909 12:02:43 -- bdev/bdev_raid.sh@513 -- # return 0
00:10:35.909
00:10:35.909 real 0m8.160s
00:10:35.909 user 0m14.324s
00:10:35.909 sys 0m1.664s
00:10:35.909 12:02:43 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:10:35.909 12:02:43 -- common/autotest_common.sh@10 -- # set +x
00:10:35.909 ************************************
00:10:35.909 END TEST raid_superblock_test
00:10:35.909 ************************************
00:10:35.909 12:02:43 -- bdev/bdev_raid.sh@725 -- # for n in {2..4}
00:10:35.909 12:02:43 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1
00:10:35.909 12:02:43 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false
00:10:35.909 12:02:43 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']'
00:10:35.909 12:02:43 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:10:35.909 12:02:43 -- common/autotest_common.sh@10 -- # set +x
00:10:35.909 ************************************
00:10:35.909 START TEST raid_state_function_test
00:10:35.909 ************************************
00:10:35.909 12:02:43 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 false
00:10:35.909 12:02:43 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0
00:10:36.167 12:02:43 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3
00:10:36.167 12:02:43 -- bdev/bdev_raid.sh@204 -- # local superblock=false
00:10:36.167 12:02:43 -- bdev/bdev_raid.sh@205 -- # local raid_bdev
00:10:36.167 12:02:43 -- bdev/bdev_raid.sh@206 -- # (( i = 1 ))
00:10:36.167 12:02:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:10:36.167 12:02:43 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1
00:10:36.167 12:02:43 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:10:36.167 12:02:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:10:36.167 12:02:43 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2
00:10:36.167 12:02:43 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:10:36.167 12:02:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:10:36.167 12:02:43 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3
00:10:36.167 12:02:43 -- bdev/bdev_raid.sh@206 -- # (( i++ ))
00:10:36.167 12:02:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs ))
00:10:36.167 12:02:43 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:10:36.167 12:02:43 -- bdev/bdev_raid.sh@206 -- # local base_bdevs
00:10:36.167 12:02:43 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid
00:10:36.167 12:02:43 -- bdev/bdev_raid.sh@208 -- # local strip_size
00:10:36.167 12:02:43 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg
00:10:36.167 12:02:43 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg
00:10:36.167 12:02:43 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']'
00:10:36.167 12:02:43 -- bdev/bdev_raid.sh@213 -- # strip_size=64
00:10:36.167 12:02:43 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64'
00:10:36.167 12:02:43 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']'
00:10:36.167 12:02:43 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg=
00:10:36.167 12:02:43 -- bdev/bdev_raid.sh@226 -- # raid_pid=1226410
00:10:36.168 12:02:43 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 1226410'
Process raid pid: 1226410
00:10:36.168 12:02:43 -- bdev/bdev_raid.sh@225 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid
00:10:36.168 12:02:43 -- bdev/bdev_raid.sh@228 -- # waitforlisten 1226410 /var/tmp/spdk-raid.sock
00:10:36.168 12:02:43 -- common/autotest_common.sh@819 -- # '[' -z 1226410 ']'
00:10:36.168 12:02:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:10:36.168 12:02:43 -- common/autotest_common.sh@824 -- # local max_retries=100
00:10:36.168 12:02:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:10:36.168 12:02:43 -- common/autotest_common.sh@828 -- # xtrace_disable
00:10:36.168 12:02:43 -- common/autotest_common.sh@10 -- # set +x
00:10:36.168 [2024-07-25 12:02:43.275543] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:10:36.168 [2024-07-25 12:02:43.275593] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:36.168 [2024-07-25 12:02:43.364018] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:36.168 [2024-07-25 12:02:43.452643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:10:36.168 [2024-07-25 12:02:43.511596] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:36.168 [2024-07-25 12:02:43.511622] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:10:36.993 12:02:44 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:10:36.993 12:02:44 -- common/autotest_common.sh@852 -- # return 0
00:10:36.993 12:02:44 -- bdev/bdev_raid.sh@232 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:10:36.993 [2024-07-25 12:02:44.211165] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:36.993 [2024-07-25 12:02:44.211198] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:36.993 [2024-07-25 12:02:44.211204] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:36.993 [2024-07-25 12:02:44.211212] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:36.993 [2024-07-25 12:02:44.211217] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:36.993 [2024-07-25 12:02:44.211225] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:36.993 12:02:44 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:10:36.993 12:02:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:10:36.993 12:02:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:10:36.993 12:02:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:10:36.993 12:02:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:10:36.993 12:02:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:10:36.993 12:02:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:10:36.993 12:02:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:10:36.993 12:02:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:10:36.993 12:02:44 -- bdev/bdev_raid.sh@125 -- # local tmp
00:10:36.993 12:02:44 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:36.993 12:02:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:37.251 12:02:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:10:37.251 "name": "Existed_Raid",
00:10:37.251 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:37.251 "strip_size_kb": 64,
00:10:37.251 "state": "configuring",
00:10:37.251 "raid_level": "raid0",
00:10:37.251 "superblock": false,
00:10:37.251 "num_base_bdevs": 3,
00:10:37.251 "num_base_bdevs_discovered": 0,
00:10:37.251 "num_base_bdevs_operational": 3,
00:10:37.251 "base_bdevs_list": [
00:10:37.251 {
00:10:37.251 "name": "BaseBdev1",
00:10:37.251 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:37.251 "is_configured": false,
00:10:37.251 "data_offset": 0,
00:10:37.251 "data_size": 0
00:10:37.251 },
00:10:37.251 {
00:10:37.251 "name": "BaseBdev2",
00:10:37.251 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:37.251 "is_configured": false,
00:10:37.251 "data_offset": 0,
00:10:37.251 "data_size": 0
00:10:37.251 },
00:10:37.251 {
00:10:37.251 "name": "BaseBdev3",
00:10:37.251 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:37.251 "is_configured": false,
00:10:37.251 "data_offset": 0,
00:10:37.251 "data_size": 0
00:10:37.251 }
00:10:37.251 ]
00:10:37.251 }'
00:10:37.251 12:02:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:10:37.251 12:02:44 -- common/autotest_common.sh@10 -- # set +x
00:10:37.818 12:02:44 -- bdev/bdev_raid.sh@234 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:10:37.818 [2024-07-25 12:02:45.009142] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:37.818 [2024-07-25 12:02:45.009162] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2644d60 name Existed_Raid, state configuring
00:10:37.818 12:02:45 -- bdev/bdev_raid.sh@238 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:10:38.076 [2024-07-25 12:02:45.185609] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:10:38.076 [2024-07-25 12:02:45.185633] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:10:38.076 [2024-07-25 12:02:45.185640] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:38.076 [2024-07-25 12:02:45.185648] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:38.076 [2024-07-25 12:02:45.185653] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:38.076 [2024-07-25 12:02:45.185661] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:38.076 12:02:45 -- bdev/bdev_raid.sh@239 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
00:10:38.076 [2024-07-25 12:02:45.374580] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:38.076 BaseBdev1
00:10:38.350 12:02:45 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1
00:10:38.350 12:02:45 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1
00:10:38.350 12:02:45 -- common/autotest_common.sh@888 -- # local bdev_timeout=
00:10:38.350 12:02:45 -- common/autotest_common.sh@889 -- # local i
00:10:38.350 12:02:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]]
00:10:38.350 12:02:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000
00:10:38.350 12:02:45 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine
00:10:38.350 12:02:45 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000
00:10:38.622 [
00:10:38.622 {
00:10:38.622 "name": "BaseBdev1",
00:10:38.622 "aliases": [
00:10:38.622 "06776769-d1b2-4fb5-b69d-1a646b2eb277"
00:10:38.622 ],
00:10:38.622 "product_name": "Malloc disk",
00:10:38.622 "block_size": 512,
00:10:38.622 "num_blocks": 65536,
00:10:38.622 "uuid": "06776769-d1b2-4fb5-b69d-1a646b2eb277",
00:10:38.622 "assigned_rate_limits": {
00:10:38.622 "rw_ios_per_sec": 0,
00:10:38.622 "rw_mbytes_per_sec": 0,
00:10:38.622 "r_mbytes_per_sec": 0,
00:10:38.622 "w_mbytes_per_sec": 0
00:10:38.622 },
00:10:38.622 "claimed": true,
00:10:38.622 "claim_type": "exclusive_write",
00:10:38.622 "zoned": false,
00:10:38.622 "supported_io_types": {
00:10:38.622 "read": true,
00:10:38.622 "write": true,
00:10:38.622 "unmap": true,
00:10:38.622 "write_zeroes": true,
00:10:38.622 "flush": true,
00:10:38.622 "reset": true,
00:10:38.622 "compare": false,
00:10:38.622 "compare_and_write": false,
00:10:38.622 "abort": true,
00:10:38.622 "nvme_admin": false,
00:10:38.622 "nvme_io": false
00:10:38.622 },
00:10:38.622 "memory_domains": [
00:10:38.622 {
00:10:38.622 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:10:38.622 "dma_device_type": 2
00:10:38.622 }
00:10:38.622 ],
00:10:38.622 "driver_specific": {}
00:10:38.622 }
00:10:38.622 ]
00:10:38.622 12:02:45 -- common/autotest_common.sh@895 -- # return 0
00:10:38.622 12:02:45 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:10:38.622 12:02:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:10:38.622 12:02:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:10:38.622 12:02:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:10:38.622 12:02:45 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:10:38.622 12:02:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:10:38.622 12:02:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:10:38.622 12:02:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:10:38.622 12:02:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:10:38.622 12:02:45 -- bdev/bdev_raid.sh@125 -- # local tmp
00:10:38.622 12:02:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:38.622 12:02:45 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:38.622 12:02:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:10:38.622 "name": "Existed_Raid",
00:10:38.622 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:38.622 "strip_size_kb": 64,
00:10:38.622 "state": "configuring",
00:10:38.622 "raid_level": "raid0",
00:10:38.622 "superblock": false,
00:10:38.622 "num_base_bdevs": 3,
00:10:38.622 "num_base_bdevs_discovered": 1,
00:10:38.622 "num_base_bdevs_operational": 3,
00:10:38.622 "base_bdevs_list": [
00:10:38.622 {
00:10:38.622 "name": "BaseBdev1",
00:10:38.622 "uuid": "06776769-d1b2-4fb5-b69d-1a646b2eb277",
00:10:38.622 "is_configured": true,
00:10:38.622 "data_offset": 0,
00:10:38.622 "data_size": 65536
00:10:38.622 },
00:10:38.622 {
00:10:38.622 "name": "BaseBdev2",
00:10:38.622 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:38.622 "is_configured": false,
00:10:38.622 "data_offset": 0,
00:10:38.622 "data_size": 0
00:10:38.622 },
00:10:38.622 {
00:10:38.622 "name": "BaseBdev3",
00:10:38.622 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:38.622 "is_configured": false,
00:10:38.622 "data_offset": 0,
00:10:38.622 "data_size": 0
00:10:38.622 }
00:10:38.622 ]
00:10:38.622 }'
00:10:38.622 12:02:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:10:38.622 12:02:45 -- common/autotest_common.sh@10 -- # set +x
00:10:39.190 12:02:46 -- bdev/bdev_raid.sh@242 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid
00:10:39.448 [2024-07-25 12:02:46.513510] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:10:39.448 [2024-07-25 12:02:46.513541] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2644630 name Existed_Raid, state configuring
00:10:39.448 12:02:46 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']'
00:10:39.448 12:02:46 -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
00:10:39.448 [2024-07-25 12:02:46.681958] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:10:39.448 [2024-07-25 12:02:46.682978] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:10:39.448 [2024-07-25 12:02:46.683002] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:10:39.448 [2024-07-25 12:02:46.683008] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:10:39.448 [2024-07-25 12:02:46.683016] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:10:39.448 12:02:46 -- bdev/bdev_raid.sh@254 -- # (( i = 1 ))
00:10:39.448 12:02:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs ))
00:10:39.448 12:02:46 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:10:39.448 12:02:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid
00:10:39.448 12:02:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:10:39.448 12:02:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:10:39.448 12:02:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:10:39.448 12:02:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:10:39.448 12:02:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:10:39.448 12:02:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:10:39.448 12:02:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:10:39.448 12:02:46 -- bdev/bdev_raid.sh@125 -- # local tmp
00:10:39.448 12:02:46 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:10:39.448 12:02:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:10:39.707 12:02:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:10:39.707 "name": "Existed_Raid",
00:10:39.707 "uuid": "00000000-0000-0000-0000-000000000000",
00:10:39.707 "strip_size_kb": 64,
00:10:39.707 "state": "configuring",
00:10:39.707 "raid_level": "raid0",
00:10:39.707 "superblock":
false, 00:10:39.707 "num_base_bdevs": 3, 00:10:39.707 "num_base_bdevs_discovered": 1, 00:10:39.707 "num_base_bdevs_operational": 3, 00:10:39.707 "base_bdevs_list": [ 00:10:39.707 { 00:10:39.707 "name": "BaseBdev1", 00:10:39.707 "uuid": "06776769-d1b2-4fb5-b69d-1a646b2eb277", 00:10:39.707 "is_configured": true, 00:10:39.707 "data_offset": 0, 00:10:39.707 "data_size": 65536 00:10:39.707 }, 00:10:39.707 { 00:10:39.707 "name": "BaseBdev2", 00:10:39.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.707 "is_configured": false, 00:10:39.707 "data_offset": 0, 00:10:39.707 "data_size": 0 00:10:39.707 }, 00:10:39.707 { 00:10:39.707 "name": "BaseBdev3", 00:10:39.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.707 "is_configured": false, 00:10:39.707 "data_offset": 0, 00:10:39.707 "data_size": 0 00:10:39.707 } 00:10:39.707 ] 00:10:39.707 }' 00:10:39.707 12:02:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:39.707 12:02:46 -- common/autotest_common.sh@10 -- # set +x 00:10:40.274 12:02:47 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:40.274 [2024-07-25 12:02:47.518977] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:40.274 BaseBdev2 00:10:40.274 12:02:47 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:10:40.274 12:02:47 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:10:40.274 12:02:47 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:40.274 12:02:47 -- common/autotest_common.sh@889 -- # local i 00:10:40.274 12:02:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:40.274 12:02:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:40.274 12:02:47 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:40.533 12:02:47 -- 
common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:40.792 [ 00:10:40.792 { 00:10:40.792 "name": "BaseBdev2", 00:10:40.792 "aliases": [ 00:10:40.792 "9cd54503-72f4-447d-8bad-eca8f7ba02f9" 00:10:40.792 ], 00:10:40.792 "product_name": "Malloc disk", 00:10:40.792 "block_size": 512, 00:10:40.792 "num_blocks": 65536, 00:10:40.792 "uuid": "9cd54503-72f4-447d-8bad-eca8f7ba02f9", 00:10:40.792 "assigned_rate_limits": { 00:10:40.792 "rw_ios_per_sec": 0, 00:10:40.792 "rw_mbytes_per_sec": 0, 00:10:40.792 "r_mbytes_per_sec": 0, 00:10:40.792 "w_mbytes_per_sec": 0 00:10:40.792 }, 00:10:40.792 "claimed": true, 00:10:40.792 "claim_type": "exclusive_write", 00:10:40.792 "zoned": false, 00:10:40.792 "supported_io_types": { 00:10:40.792 "read": true, 00:10:40.792 "write": true, 00:10:40.792 "unmap": true, 00:10:40.792 "write_zeroes": true, 00:10:40.792 "flush": true, 00:10:40.792 "reset": true, 00:10:40.792 "compare": false, 00:10:40.792 "compare_and_write": false, 00:10:40.792 "abort": true, 00:10:40.792 "nvme_admin": false, 00:10:40.792 "nvme_io": false 00:10:40.792 }, 00:10:40.792 "memory_domains": [ 00:10:40.792 { 00:10:40.792 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.792 "dma_device_type": 2 00:10:40.792 } 00:10:40.792 ], 00:10:40.792 "driver_specific": {} 00:10:40.792 } 00:10:40.792 ] 00:10:40.792 12:02:47 -- common/autotest_common.sh@895 -- # return 0 00:10:40.792 12:02:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:10:40.792 12:02:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:40.792 12:02:47 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:40.792 12:02:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:40.792 12:02:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:40.792 12:02:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:40.792 
12:02:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:40.792 12:02:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:40.792 12:02:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:40.792 12:02:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:40.792 12:02:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:40.792 12:02:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:40.792 12:02:47 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:40.792 12:02:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.792 12:02:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:40.792 "name": "Existed_Raid", 00:10:40.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.792 "strip_size_kb": 64, 00:10:40.792 "state": "configuring", 00:10:40.792 "raid_level": "raid0", 00:10:40.792 "superblock": false, 00:10:40.792 "num_base_bdevs": 3, 00:10:40.792 "num_base_bdevs_discovered": 2, 00:10:40.792 "num_base_bdevs_operational": 3, 00:10:40.792 "base_bdevs_list": [ 00:10:40.792 { 00:10:40.792 "name": "BaseBdev1", 00:10:40.792 "uuid": "06776769-d1b2-4fb5-b69d-1a646b2eb277", 00:10:40.792 "is_configured": true, 00:10:40.792 "data_offset": 0, 00:10:40.792 "data_size": 65536 00:10:40.792 }, 00:10:40.792 { 00:10:40.792 "name": "BaseBdev2", 00:10:40.792 "uuid": "9cd54503-72f4-447d-8bad-eca8f7ba02f9", 00:10:40.792 "is_configured": true, 00:10:40.792 "data_offset": 0, 00:10:40.792 "data_size": 65536 00:10:40.792 }, 00:10:40.792 { 00:10:40.792 "name": "BaseBdev3", 00:10:40.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.792 "is_configured": false, 00:10:40.792 "data_offset": 0, 00:10:40.792 "data_size": 0 00:10:40.792 } 00:10:40.792 ] 00:10:40.792 }' 00:10:40.792 12:02:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:40.792 12:02:48 -- common/autotest_common.sh@10 -- # 
set +x 00:10:41.359 12:02:48 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:41.617 [2024-07-25 12:02:48.680822] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:41.617 [2024-07-25 12:02:48.680858] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x2645630 00:10:41.617 [2024-07-25 12:02:48.680864] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:41.617 [2024-07-25 12:02:48.681045] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2649f30 00:10:41.617 [2024-07-25 12:02:48.681129] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2645630 00:10:41.617 [2024-07-25 12:02:48.681135] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2645630 00:10:41.617 [2024-07-25 12:02:48.681254] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:41.617 BaseBdev3 00:10:41.617 12:02:48 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:10:41.617 12:02:48 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:10:41.617 12:02:48 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:41.617 12:02:48 -- common/autotest_common.sh@889 -- # local i 00:10:41.617 12:02:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:41.617 12:02:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:41.617 12:02:48 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:41.617 12:02:48 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:41.875 [ 00:10:41.875 { 00:10:41.875 "name": "BaseBdev3", 00:10:41.875 "aliases": 
[ 00:10:41.875 "60b60390-0428-46fd-819c-29122e2bd21e" 00:10:41.875 ], 00:10:41.875 "product_name": "Malloc disk", 00:10:41.875 "block_size": 512, 00:10:41.875 "num_blocks": 65536, 00:10:41.875 "uuid": "60b60390-0428-46fd-819c-29122e2bd21e", 00:10:41.875 "assigned_rate_limits": { 00:10:41.875 "rw_ios_per_sec": 0, 00:10:41.875 "rw_mbytes_per_sec": 0, 00:10:41.875 "r_mbytes_per_sec": 0, 00:10:41.875 "w_mbytes_per_sec": 0 00:10:41.875 }, 00:10:41.875 "claimed": true, 00:10:41.875 "claim_type": "exclusive_write", 00:10:41.875 "zoned": false, 00:10:41.875 "supported_io_types": { 00:10:41.875 "read": true, 00:10:41.875 "write": true, 00:10:41.875 "unmap": true, 00:10:41.875 "write_zeroes": true, 00:10:41.875 "flush": true, 00:10:41.875 "reset": true, 00:10:41.875 "compare": false, 00:10:41.875 "compare_and_write": false, 00:10:41.875 "abort": true, 00:10:41.875 "nvme_admin": false, 00:10:41.875 "nvme_io": false 00:10:41.875 }, 00:10:41.875 "memory_domains": [ 00:10:41.875 { 00:10:41.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:41.875 "dma_device_type": 2 00:10:41.875 } 00:10:41.875 ], 00:10:41.875 "driver_specific": {} 00:10:41.875 } 00:10:41.875 ] 00:10:41.875 12:02:49 -- common/autotest_common.sh@895 -- # return 0 00:10:41.875 12:02:49 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:10:41.875 12:02:49 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:41.875 12:02:49 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:41.875 12:02:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:41.875 12:02:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:41.875 12:02:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:41.875 12:02:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:41.875 12:02:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:41.875 12:02:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:41.875 12:02:49 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:41.875 12:02:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:41.875 12:02:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:41.875 12:02:49 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:41.875 12:02:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.134 12:02:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:42.134 "name": "Existed_Raid", 00:10:42.134 "uuid": "81aa31c9-256b-4028-a0ba-ffa312bd0da9", 00:10:42.134 "strip_size_kb": 64, 00:10:42.134 "state": "online", 00:10:42.134 "raid_level": "raid0", 00:10:42.134 "superblock": false, 00:10:42.134 "num_base_bdevs": 3, 00:10:42.134 "num_base_bdevs_discovered": 3, 00:10:42.134 "num_base_bdevs_operational": 3, 00:10:42.134 "base_bdevs_list": [ 00:10:42.134 { 00:10:42.134 "name": "BaseBdev1", 00:10:42.134 "uuid": "06776769-d1b2-4fb5-b69d-1a646b2eb277", 00:10:42.134 "is_configured": true, 00:10:42.134 "data_offset": 0, 00:10:42.134 "data_size": 65536 00:10:42.134 }, 00:10:42.134 { 00:10:42.134 "name": "BaseBdev2", 00:10:42.134 "uuid": "9cd54503-72f4-447d-8bad-eca8f7ba02f9", 00:10:42.134 "is_configured": true, 00:10:42.134 "data_offset": 0, 00:10:42.134 "data_size": 65536 00:10:42.134 }, 00:10:42.134 { 00:10:42.134 "name": "BaseBdev3", 00:10:42.134 "uuid": "60b60390-0428-46fd-819c-29122e2bd21e", 00:10:42.134 "is_configured": true, 00:10:42.134 "data_offset": 0, 00:10:42.134 "data_size": 65536 00:10:42.134 } 00:10:42.134 ] 00:10:42.134 }' 00:10:42.134 12:02:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:42.134 12:02:49 -- common/autotest_common.sh@10 -- # set +x 00:10:42.391 12:02:49 -- bdev/bdev_raid.sh@262 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:42.648 [2024-07-25 12:02:49.831993] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:42.648 [2024-07-25 12:02:49.832017] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:42.648 [2024-07-25 12:02:49.832045] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:42.648 12:02:49 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:10:42.648 12:02:49 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:10:42.648 12:02:49 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:10:42.648 12:02:49 -- bdev/bdev_raid.sh@197 -- # return 1 00:10:42.649 12:02:49 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:10:42.649 12:02:49 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:42.649 12:02:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:42.649 12:02:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:10:42.649 12:02:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:42.649 12:02:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:42.649 12:02:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:42.649 12:02:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:42.649 12:02:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:42.649 12:02:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:42.649 12:02:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:42.649 12:02:49 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:42.649 12:02:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.906 12:02:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:42.906 "name": "Existed_Raid", 00:10:42.906 "uuid": "81aa31c9-256b-4028-a0ba-ffa312bd0da9", 00:10:42.906 "strip_size_kb": 64, 00:10:42.906 "state": "offline", 00:10:42.906 "raid_level": "raid0", 
00:10:42.906 "superblock": false, 00:10:42.906 "num_base_bdevs": 3, 00:10:42.906 "num_base_bdevs_discovered": 2, 00:10:42.906 "num_base_bdevs_operational": 2, 00:10:42.906 "base_bdevs_list": [ 00:10:42.906 { 00:10:42.906 "name": null, 00:10:42.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.906 "is_configured": false, 00:10:42.906 "data_offset": 0, 00:10:42.906 "data_size": 65536 00:10:42.906 }, 00:10:42.906 { 00:10:42.906 "name": "BaseBdev2", 00:10:42.906 "uuid": "9cd54503-72f4-447d-8bad-eca8f7ba02f9", 00:10:42.906 "is_configured": true, 00:10:42.906 "data_offset": 0, 00:10:42.906 "data_size": 65536 00:10:42.906 }, 00:10:42.906 { 00:10:42.906 "name": "BaseBdev3", 00:10:42.906 "uuid": "60b60390-0428-46fd-819c-29122e2bd21e", 00:10:42.906 "is_configured": true, 00:10:42.906 "data_offset": 0, 00:10:42.906 "data_size": 65536 00:10:42.906 } 00:10:42.906 ] 00:10:42.906 }' 00:10:42.906 12:02:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:42.906 12:02:50 -- common/autotest_common.sh@10 -- # set +x 00:10:43.473 12:02:50 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:10:43.473 12:02:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:43.473 12:02:50 -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:43.473 12:02:50 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:10:43.473 12:02:50 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:10:43.473 12:02:50 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:43.473 12:02:50 -- bdev/bdev_raid.sh@279 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:43.732 [2024-07-25 12:02:50.827428] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:43.732 12:02:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:10:43.732 12:02:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 
00:10:43.732 12:02:50 -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:43.732 12:02:50 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:10:43.732 12:02:51 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:10:43.732 12:02:51 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:43.732 12:02:51 -- bdev/bdev_raid.sh@279 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:10:43.991 [2024-07-25 12:02:51.184194] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:43.991 [2024-07-25 12:02:51.184228] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2645630 name Existed_Raid, state offline 00:10:43.991 12:02:51 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:10:43.991 12:02:51 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:43.991 12:02:51 -- bdev/bdev_raid.sh@281 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:43.991 12:02:51 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:10:44.249 12:02:51 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:10:44.249 12:02:51 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:10:44.249 12:02:51 -- bdev/bdev_raid.sh@287 -- # killprocess 1226410 00:10:44.249 12:02:51 -- common/autotest_common.sh@926 -- # '[' -z 1226410 ']' 00:10:44.249 12:02:51 -- common/autotest_common.sh@930 -- # kill -0 1226410 00:10:44.249 12:02:51 -- common/autotest_common.sh@931 -- # uname 00:10:44.249 12:02:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:44.249 12:02:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1226410 00:10:44.249 12:02:51 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:44.249 12:02:51 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:44.249 
12:02:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1226410' 00:10:44.249 killing process with pid 1226410 00:10:44.249 12:02:51 -- common/autotest_common.sh@945 -- # kill 1226410 00:10:44.249 [2024-07-25 12:02:51.415648] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:44.249 12:02:51 -- common/autotest_common.sh@950 -- # wait 1226410 00:10:44.249 [2024-07-25 12:02:51.416456] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:44.508 12:02:51 -- bdev/bdev_raid.sh@289 -- # return 0 00:10:44.508 00:10:44.508 real 0m8.411s 00:10:44.508 user 0m14.734s 00:10:44.508 sys 0m1.664s 00:10:44.508 12:02:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:44.508 12:02:51 -- common/autotest_common.sh@10 -- # set +x 00:10:44.508 ************************************ 00:10:44.508 END TEST raid_state_function_test 00:10:44.508 ************************************ 00:10:44.508 12:02:51 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:10:44.508 12:02:51 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:10:44.508 12:02:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:44.508 12:02:51 -- common/autotest_common.sh@10 -- # set +x 00:10:44.508 ************************************ 00:10:44.508 START TEST raid_state_function_test_sb 00:10:44.508 ************************************ 00:10:44.508 12:02:51 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 3 true 00:10:44.508 12:02:51 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:10:44.508 12:02:51 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:10:44.508 12:02:51 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:10:44.508 12:02:51 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:10:44.508 12:02:51 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:10:44.508 12:02:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:44.508 12:02:51 -- 
bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:10:44.508 12:02:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:44.508 12:02:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:44.508 12:02:51 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:10:44.508 12:02:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:44.508 12:02:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:44.508 12:02:51 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:10:44.508 12:02:51 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:10:44.508 12:02:51 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:10:44.508 12:02:51 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:44.508 12:02:51 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:10:44.508 12:02:51 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:10:44.508 12:02:51 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:10:44.508 12:02:51 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:10:44.508 12:02:51 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:10:44.508 12:02:51 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:10:44.508 12:02:51 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:10:44.508 12:02:51 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:10:44.508 12:02:51 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:10:44.508 12:02:51 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:10:44.508 12:02:51 -- bdev/bdev_raid.sh@226 -- # raid_pid=1227725 00:10:44.508 12:02:51 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 1227725' 00:10:44.508 Process raid pid: 1227725 00:10:44.508 12:02:51 -- bdev/bdev_raid.sh@225 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:44.508 12:02:51 -- bdev/bdev_raid.sh@228 -- # waitforlisten 1227725 /var/tmp/spdk-raid.sock 00:10:44.508 12:02:51 -- common/autotest_common.sh@819 -- # '[' -z 1227725 ']' 
00:10:44.508 12:02:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:44.508 12:02:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:44.508 12:02:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:44.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:44.508 12:02:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:44.508 12:02:51 -- common/autotest_common.sh@10 -- # set +x 00:10:44.508 [2024-07-25 12:02:51.735751] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:44.508 [2024-07-25 12:02:51.735803] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:44.766 [2024-07-25 12:02:51.825344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:44.766 [2024-07-25 12:02:51.915211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.766 [2024-07-25 12:02:51.975686] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:44.766 [2024-07-25 12:02:51.975711] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:45.334 12:02:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:45.334 12:02:52 -- common/autotest_common.sh@852 -- # return 0 00:10:45.334 12:02:52 -- bdev/bdev_raid.sh@232 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:45.592 [2024-07-25 12:02:52.699752] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:45.592 [2024-07-25 12:02:52.699784] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: 
base bdev BaseBdev1 doesn't exist now 00:10:45.592 [2024-07-25 12:02:52.699791] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:45.592 [2024-07-25 12:02:52.699799] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:45.592 [2024-07-25 12:02:52.699804] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:45.592 [2024-07-25 12:02:52.699811] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:45.592 12:02:52 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:45.592 12:02:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:45.592 12:02:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:45.592 12:02:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:45.592 12:02:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:45.592 12:02:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:45.592 12:02:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:45.592 12:02:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:45.592 12:02:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:45.592 12:02:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:45.592 12:02:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:45.592 12:02:52 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:45.592 12:02:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:45.592 "name": "Existed_Raid", 00:10:45.592 "uuid": "5cfd6caa-6d6d-4c3d-ba5a-0dd8905359d4", 00:10:45.592 "strip_size_kb": 64, 00:10:45.592 "state": "configuring", 00:10:45.592 "raid_level": "raid0", 00:10:45.592 "superblock": true, 00:10:45.592 "num_base_bdevs": 3, 00:10:45.592 
"num_base_bdevs_discovered": 0, 00:10:45.592 "num_base_bdevs_operational": 3, 00:10:45.592 "base_bdevs_list": [ 00:10:45.592 { 00:10:45.592 "name": "BaseBdev1", 00:10:45.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.592 "is_configured": false, 00:10:45.592 "data_offset": 0, 00:10:45.592 "data_size": 0 00:10:45.592 }, 00:10:45.592 { 00:10:45.592 "name": "BaseBdev2", 00:10:45.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.592 "is_configured": false, 00:10:45.592 "data_offset": 0, 00:10:45.592 "data_size": 0 00:10:45.592 }, 00:10:45.592 { 00:10:45.592 "name": "BaseBdev3", 00:10:45.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:45.592 "is_configured": false, 00:10:45.592 "data_offset": 0, 00:10:45.592 "data_size": 0 00:10:45.592 } 00:10:45.592 ] 00:10:45.592 }' 00:10:45.592 12:02:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:45.592 12:02:52 -- common/autotest_common.sh@10 -- # set +x 00:10:46.159 12:02:53 -- bdev/bdev_raid.sh@234 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:46.417 [2024-07-25 12:02:53.501737] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:46.417 [2024-07-25 12:02:53.501757] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1822d60 name Existed_Raid, state configuring 00:10:46.417 12:02:53 -- bdev/bdev_raid.sh@238 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:46.417 [2024-07-25 12:02:53.670195] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:46.417 [2024-07-25 12:02:53.670216] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:46.417 [2024-07-25 12:02:53.670221] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev2 00:10:46.417 [2024-07-25 12:02:53.670228] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:46.417 [2024-07-25 12:02:53.670233] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:46.417 [2024-07-25 12:02:53.670240] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:46.417 12:02:53 -- bdev/bdev_raid.sh@239 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:46.675 [2024-07-25 12:02:53.851119] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:46.675 BaseBdev1 00:10:46.675 12:02:53 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:10:46.675 12:02:53 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:10:46.675 12:02:53 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:46.675 12:02:53 -- common/autotest_common.sh@889 -- # local i 00:10:46.675 12:02:53 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:46.675 12:02:53 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:46.675 12:02:53 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:46.933 12:02:54 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:46.933 [ 00:10:46.933 { 00:10:46.933 "name": "BaseBdev1", 00:10:46.933 "aliases": [ 00:10:46.933 "2b6d3a05-53e3-495d-bb6c-9ebf23dfab0f" 00:10:46.933 ], 00:10:46.933 "product_name": "Malloc disk", 00:10:46.933 "block_size": 512, 00:10:46.933 "num_blocks": 65536, 00:10:46.933 "uuid": "2b6d3a05-53e3-495d-bb6c-9ebf23dfab0f", 00:10:46.933 "assigned_rate_limits": { 00:10:46.933 "rw_ios_per_sec": 0, 00:10:46.933 "rw_mbytes_per_sec": 0, 
00:10:46.933 "r_mbytes_per_sec": 0, 00:10:46.933 "w_mbytes_per_sec": 0 00:10:46.933 }, 00:10:46.933 "claimed": true, 00:10:46.933 "claim_type": "exclusive_write", 00:10:46.933 "zoned": false, 00:10:46.933 "supported_io_types": { 00:10:46.933 "read": true, 00:10:46.933 "write": true, 00:10:46.933 "unmap": true, 00:10:46.933 "write_zeroes": true, 00:10:46.933 "flush": true, 00:10:46.933 "reset": true, 00:10:46.933 "compare": false, 00:10:46.933 "compare_and_write": false, 00:10:46.933 "abort": true, 00:10:46.933 "nvme_admin": false, 00:10:46.933 "nvme_io": false 00:10:46.933 }, 00:10:46.933 "memory_domains": [ 00:10:46.933 { 00:10:46.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.933 "dma_device_type": 2 00:10:46.933 } 00:10:46.933 ], 00:10:46.933 "driver_specific": {} 00:10:46.933 } 00:10:46.933 ] 00:10:46.933 12:02:54 -- common/autotest_common.sh@895 -- # return 0 00:10:46.933 12:02:54 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:46.933 12:02:54 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:46.934 12:02:54 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:46.934 12:02:54 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:46.934 12:02:54 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:46.934 12:02:54 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:46.934 12:02:54 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:46.934 12:02:54 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:46.934 12:02:54 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:46.934 12:02:54 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:46.934 12:02:54 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:46.934 12:02:54 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.192 12:02:54 -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:47.192 "name": "Existed_Raid", 00:10:47.192 "uuid": "11b4ddad-fff2-4906-8706-22d59948e0d3", 00:10:47.192 "strip_size_kb": 64, 00:10:47.192 "state": "configuring", 00:10:47.192 "raid_level": "raid0", 00:10:47.192 "superblock": true, 00:10:47.192 "num_base_bdevs": 3, 00:10:47.192 "num_base_bdevs_discovered": 1, 00:10:47.192 "num_base_bdevs_operational": 3, 00:10:47.192 "base_bdevs_list": [ 00:10:47.192 { 00:10:47.192 "name": "BaseBdev1", 00:10:47.192 "uuid": "2b6d3a05-53e3-495d-bb6c-9ebf23dfab0f", 00:10:47.192 "is_configured": true, 00:10:47.192 "data_offset": 2048, 00:10:47.192 "data_size": 63488 00:10:47.192 }, 00:10:47.192 { 00:10:47.192 "name": "BaseBdev2", 00:10:47.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.192 "is_configured": false, 00:10:47.192 "data_offset": 0, 00:10:47.192 "data_size": 0 00:10:47.192 }, 00:10:47.192 { 00:10:47.192 "name": "BaseBdev3", 00:10:47.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.192 "is_configured": false, 00:10:47.192 "data_offset": 0, 00:10:47.192 "data_size": 0 00:10:47.192 } 00:10:47.192 ] 00:10:47.192 }' 00:10:47.192 12:02:54 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:47.192 12:02:54 -- common/autotest_common.sh@10 -- # set +x 00:10:47.758 12:02:54 -- bdev/bdev_raid.sh@242 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:47.758 [2024-07-25 12:02:54.941918] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:47.758 [2024-07-25 12:02:54.941948] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1822630 name Existed_Raid, state configuring 00:10:47.758 12:02:54 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:10:47.758 12:02:54 -- bdev/bdev_raid.sh@246 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:48.016 12:02:55 -- 
bdev/bdev_raid.sh@247 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:48.016 BaseBdev1 00:10:48.016 12:02:55 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:10:48.016 12:02:55 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:10:48.016 12:02:55 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:48.016 12:02:55 -- common/autotest_common.sh@889 -- # local i 00:10:48.016 12:02:55 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:48.016 12:02:55 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:48.016 12:02:55 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:48.274 12:02:55 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:48.532 [ 00:10:48.532 { 00:10:48.532 "name": "BaseBdev1", 00:10:48.532 "aliases": [ 00:10:48.532 "0c31ab48-41e7-46a2-b434-9d38ed8c3138" 00:10:48.532 ], 00:10:48.532 "product_name": "Malloc disk", 00:10:48.532 "block_size": 512, 00:10:48.532 "num_blocks": 65536, 00:10:48.532 "uuid": "0c31ab48-41e7-46a2-b434-9d38ed8c3138", 00:10:48.532 "assigned_rate_limits": { 00:10:48.532 "rw_ios_per_sec": 0, 00:10:48.532 "rw_mbytes_per_sec": 0, 00:10:48.532 "r_mbytes_per_sec": 0, 00:10:48.532 "w_mbytes_per_sec": 0 00:10:48.532 }, 00:10:48.532 "claimed": false, 00:10:48.532 "zoned": false, 00:10:48.532 "supported_io_types": { 00:10:48.532 "read": true, 00:10:48.532 "write": true, 00:10:48.532 "unmap": true, 00:10:48.532 "write_zeroes": true, 00:10:48.532 "flush": true, 00:10:48.532 "reset": true, 00:10:48.532 "compare": false, 00:10:48.532 "compare_and_write": false, 00:10:48.532 "abort": true, 00:10:48.532 "nvme_admin": false, 00:10:48.532 "nvme_io": false 00:10:48.532 }, 00:10:48.532 
"memory_domains": [ 00:10:48.532 { 00:10:48.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.532 "dma_device_type": 2 00:10:48.532 } 00:10:48.532 ], 00:10:48.532 "driver_specific": {} 00:10:48.532 } 00:10:48.532 ] 00:10:48.532 12:02:55 -- common/autotest_common.sh@895 -- # return 0 00:10:48.533 12:02:55 -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:48.533 [2024-07-25 12:02:55.780751] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:48.533 [2024-07-25 12:02:55.781611] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:48.533 [2024-07-25 12:02:55.781634] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:48.533 [2024-07-25 12:02:55.781640] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:48.533 [2024-07-25 12:02:55.781667] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:48.533 12:02:55 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:10:48.533 12:02:55 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:48.533 12:02:55 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:48.533 12:02:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:48.533 12:02:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:48.533 12:02:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:48.533 12:02:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:48.533 12:02:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:48.533 12:02:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:48.533 12:02:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:48.533 12:02:55 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:48.533 12:02:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:48.533 12:02:55 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:48.533 12:02:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.791 12:02:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:48.791 "name": "Existed_Raid", 00:10:48.791 "uuid": "5e09b116-aaf1-469b-b70a-baf5e4c335c7", 00:10:48.791 "strip_size_kb": 64, 00:10:48.791 "state": "configuring", 00:10:48.791 "raid_level": "raid0", 00:10:48.791 "superblock": true, 00:10:48.791 "num_base_bdevs": 3, 00:10:48.791 "num_base_bdevs_discovered": 1, 00:10:48.791 "num_base_bdevs_operational": 3, 00:10:48.791 "base_bdevs_list": [ 00:10:48.791 { 00:10:48.791 "name": "BaseBdev1", 00:10:48.791 "uuid": "0c31ab48-41e7-46a2-b434-9d38ed8c3138", 00:10:48.791 "is_configured": true, 00:10:48.791 "data_offset": 2048, 00:10:48.791 "data_size": 63488 00:10:48.791 }, 00:10:48.791 { 00:10:48.791 "name": "BaseBdev2", 00:10:48.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.791 "is_configured": false, 00:10:48.791 "data_offset": 0, 00:10:48.791 "data_size": 0 00:10:48.791 }, 00:10:48.791 { 00:10:48.791 "name": "BaseBdev3", 00:10:48.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.791 "is_configured": false, 00:10:48.791 "data_offset": 0, 00:10:48.791 "data_size": 0 00:10:48.791 } 00:10:48.791 ] 00:10:48.791 }' 00:10:48.791 12:02:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:48.791 12:02:55 -- common/autotest_common.sh@10 -- # set +x 00:10:49.357 12:02:56 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:49.357 [2024-07-25 12:02:56.625689] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 
00:10:49.357 BaseBdev2 00:10:49.357 12:02:56 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:10:49.357 12:02:56 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:10:49.357 12:02:56 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:10:49.357 12:02:56 -- common/autotest_common.sh@889 -- # local i 00:10:49.357 12:02:56 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:49.357 12:02:56 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:49.357 12:02:56 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:49.615 12:02:56 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:49.874 [ 00:10:49.874 { 00:10:49.874 "name": "BaseBdev2", 00:10:49.874 "aliases": [ 00:10:49.874 "e15ce4ab-bccf-4110-b4a4-16b8ab976562" 00:10:49.874 ], 00:10:49.874 "product_name": "Malloc disk", 00:10:49.874 "block_size": 512, 00:10:49.874 "num_blocks": 65536, 00:10:49.874 "uuid": "e15ce4ab-bccf-4110-b4a4-16b8ab976562", 00:10:49.874 "assigned_rate_limits": { 00:10:49.874 "rw_ios_per_sec": 0, 00:10:49.874 "rw_mbytes_per_sec": 0, 00:10:49.874 "r_mbytes_per_sec": 0, 00:10:49.874 "w_mbytes_per_sec": 0 00:10:49.874 }, 00:10:49.874 "claimed": true, 00:10:49.874 "claim_type": "exclusive_write", 00:10:49.874 "zoned": false, 00:10:49.874 "supported_io_types": { 00:10:49.874 "read": true, 00:10:49.874 "write": true, 00:10:49.874 "unmap": true, 00:10:49.874 "write_zeroes": true, 00:10:49.874 "flush": true, 00:10:49.874 "reset": true, 00:10:49.874 "compare": false, 00:10:49.874 "compare_and_write": false, 00:10:49.874 "abort": true, 00:10:49.874 "nvme_admin": false, 00:10:49.874 "nvme_io": false 00:10:49.874 }, 00:10:49.874 "memory_domains": [ 00:10:49.874 { 00:10:49.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.874 "dma_device_type": 2 
00:10:49.874 } 00:10:49.874 ], 00:10:49.874 "driver_specific": {} 00:10:49.874 } 00:10:49.874 ] 00:10:49.874 12:02:56 -- common/autotest_common.sh@895 -- # return 0 00:10:49.874 12:02:56 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:10:49.874 12:02:56 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:49.874 12:02:56 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:49.874 12:02:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:49.874 12:02:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:49.874 12:02:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:49.874 12:02:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:49.874 12:02:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:49.874 12:02:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:49.874 12:02:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:49.874 12:02:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:49.874 12:02:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:49.874 12:02:56 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:49.874 12:02:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.874 12:02:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:49.874 "name": "Existed_Raid", 00:10:49.874 "uuid": "5e09b116-aaf1-469b-b70a-baf5e4c335c7", 00:10:49.874 "strip_size_kb": 64, 00:10:49.874 "state": "configuring", 00:10:49.874 "raid_level": "raid0", 00:10:49.874 "superblock": true, 00:10:49.874 "num_base_bdevs": 3, 00:10:49.874 "num_base_bdevs_discovered": 2, 00:10:49.874 "num_base_bdevs_operational": 3, 00:10:49.874 "base_bdevs_list": [ 00:10:49.874 { 00:10:49.874 "name": "BaseBdev1", 00:10:49.874 "uuid": "0c31ab48-41e7-46a2-b434-9d38ed8c3138", 00:10:49.874 "is_configured": true, 
00:10:49.874 "data_offset": 2048, 00:10:49.874 "data_size": 63488 00:10:49.874 }, 00:10:49.874 { 00:10:49.874 "name": "BaseBdev2", 00:10:49.874 "uuid": "e15ce4ab-bccf-4110-b4a4-16b8ab976562", 00:10:49.874 "is_configured": true, 00:10:49.874 "data_offset": 2048, 00:10:49.874 "data_size": 63488 00:10:49.874 }, 00:10:49.874 { 00:10:49.874 "name": "BaseBdev3", 00:10:49.874 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.874 "is_configured": false, 00:10:49.874 "data_offset": 0, 00:10:49.874 "data_size": 0 00:10:49.874 } 00:10:49.874 ] 00:10:49.874 }' 00:10:49.874 12:02:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:49.874 12:02:57 -- common/autotest_common.sh@10 -- # set +x 00:10:50.440 12:02:57 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:50.698 [2024-07-25 12:02:57.759451] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:50.698 [2024-07-25 12:02:57.759568] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x19c3180 00:10:50.698 [2024-07-25 12:02:57.759578] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:50.698 [2024-07-25 12:02:57.759700] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x19c38c0 00:10:50.698 [2024-07-25 12:02:57.759779] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x19c3180 00:10:50.698 [2024-07-25 12:02:57.759786] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x19c3180 00:10:50.698 [2024-07-25 12:02:57.759850] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:50.698 BaseBdev3 00:10:50.698 12:02:57 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:10:50.698 12:02:57 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:10:50.698 12:02:57 -- common/autotest_common.sh@888 
-- # local bdev_timeout= 00:10:50.698 12:02:57 -- common/autotest_common.sh@889 -- # local i 00:10:50.698 12:02:57 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:10:50.698 12:02:57 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:10:50.698 12:02:57 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:50.698 12:02:57 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:50.957 [ 00:10:50.957 { 00:10:50.957 "name": "BaseBdev3", 00:10:50.957 "aliases": [ 00:10:50.957 "8c0bcce2-b052-4089-b959-d29e6c7ababd" 00:10:50.957 ], 00:10:50.957 "product_name": "Malloc disk", 00:10:50.957 "block_size": 512, 00:10:50.957 "num_blocks": 65536, 00:10:50.957 "uuid": "8c0bcce2-b052-4089-b959-d29e6c7ababd", 00:10:50.957 "assigned_rate_limits": { 00:10:50.957 "rw_ios_per_sec": 0, 00:10:50.957 "rw_mbytes_per_sec": 0, 00:10:50.957 "r_mbytes_per_sec": 0, 00:10:50.957 "w_mbytes_per_sec": 0 00:10:50.957 }, 00:10:50.957 "claimed": true, 00:10:50.957 "claim_type": "exclusive_write", 00:10:50.957 "zoned": false, 00:10:50.957 "supported_io_types": { 00:10:50.957 "read": true, 00:10:50.957 "write": true, 00:10:50.957 "unmap": true, 00:10:50.957 "write_zeroes": true, 00:10:50.957 "flush": true, 00:10:50.957 "reset": true, 00:10:50.957 "compare": false, 00:10:50.957 "compare_and_write": false, 00:10:50.957 "abort": true, 00:10:50.957 "nvme_admin": false, 00:10:50.957 "nvme_io": false 00:10:50.957 }, 00:10:50.957 "memory_domains": [ 00:10:50.957 { 00:10:50.957 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.957 "dma_device_type": 2 00:10:50.957 } 00:10:50.957 ], 00:10:50.957 "driver_specific": {} 00:10:50.957 } 00:10:50.957 ] 00:10:50.957 12:02:58 -- common/autotest_common.sh@895 -- # return 0 00:10:50.957 12:02:58 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:10:50.957 
12:02:58 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:10:50.957 12:02:58 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:50.957 12:02:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:50.957 12:02:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:50.957 12:02:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:50.957 12:02:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:50.957 12:02:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:50.957 12:02:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:50.957 12:02:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:50.957 12:02:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:50.957 12:02:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:50.957 12:02:58 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:50.957 12:02:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.217 12:02:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:51.217 "name": "Existed_Raid", 00:10:51.217 "uuid": "5e09b116-aaf1-469b-b70a-baf5e4c335c7", 00:10:51.217 "strip_size_kb": 64, 00:10:51.217 "state": "online", 00:10:51.217 "raid_level": "raid0", 00:10:51.217 "superblock": true, 00:10:51.217 "num_base_bdevs": 3, 00:10:51.217 "num_base_bdevs_discovered": 3, 00:10:51.217 "num_base_bdevs_operational": 3, 00:10:51.217 "base_bdevs_list": [ 00:10:51.217 { 00:10:51.217 "name": "BaseBdev1", 00:10:51.217 "uuid": "0c31ab48-41e7-46a2-b434-9d38ed8c3138", 00:10:51.217 "is_configured": true, 00:10:51.217 "data_offset": 2048, 00:10:51.217 "data_size": 63488 00:10:51.217 }, 00:10:51.217 { 00:10:51.217 "name": "BaseBdev2", 00:10:51.217 "uuid": "e15ce4ab-bccf-4110-b4a4-16b8ab976562", 00:10:51.217 "is_configured": true, 00:10:51.217 "data_offset": 2048, 
00:10:51.217 "data_size": 63488 00:10:51.217 }, 00:10:51.217 { 00:10:51.217 "name": "BaseBdev3", 00:10:51.217 "uuid": "8c0bcce2-b052-4089-b959-d29e6c7ababd", 00:10:51.217 "is_configured": true, 00:10:51.217 "data_offset": 2048, 00:10:51.217 "data_size": 63488 00:10:51.217 } 00:10:51.217 ] 00:10:51.217 }' 00:10:51.217 12:02:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:51.217 12:02:58 -- common/autotest_common.sh@10 -- # set +x 00:10:51.475 12:02:58 -- bdev/bdev_raid.sh@262 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:51.734 [2024-07-25 12:02:58.894418] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:51.734 [2024-07-25 12:02:58.894441] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:51.734 [2024-07-25 12:02:58.894470] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:51.734 12:02:58 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:10:51.734 12:02:58 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:10:51.734 12:02:58 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:10:51.734 12:02:58 -- bdev/bdev_raid.sh@197 -- # return 1 00:10:51.734 12:02:58 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:10:51.734 12:02:58 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:51.734 12:02:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:10:51.734 12:02:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:10:51.734 12:02:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:51.734 12:02:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:51.734 12:02:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:10:51.734 12:02:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:51.734 12:02:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:51.734 12:02:58 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:51.734 12:02:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:51.734 12:02:58 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:51.734 12:02:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.992 12:02:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:51.992 "name": "Existed_Raid", 00:10:51.992 "uuid": "5e09b116-aaf1-469b-b70a-baf5e4c335c7", 00:10:51.992 "strip_size_kb": 64, 00:10:51.992 "state": "offline", 00:10:51.992 "raid_level": "raid0", 00:10:51.992 "superblock": true, 00:10:51.992 "num_base_bdevs": 3, 00:10:51.992 "num_base_bdevs_discovered": 2, 00:10:51.992 "num_base_bdevs_operational": 2, 00:10:51.992 "base_bdevs_list": [ 00:10:51.992 { 00:10:51.992 "name": null, 00:10:51.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.992 "is_configured": false, 00:10:51.992 "data_offset": 2048, 00:10:51.992 "data_size": 63488 00:10:51.992 }, 00:10:51.992 { 00:10:51.992 "name": "BaseBdev2", 00:10:51.992 "uuid": "e15ce4ab-bccf-4110-b4a4-16b8ab976562", 00:10:51.992 "is_configured": true, 00:10:51.992 "data_offset": 2048, 00:10:51.992 "data_size": 63488 00:10:51.992 }, 00:10:51.992 { 00:10:51.992 "name": "BaseBdev3", 00:10:51.992 "uuid": "8c0bcce2-b052-4089-b959-d29e6c7ababd", 00:10:51.992 "is_configured": true, 00:10:51.992 "data_offset": 2048, 00:10:51.992 "data_size": 63488 00:10:51.992 } 00:10:51.992 ] 00:10:51.992 }' 00:10:51.992 12:02:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:51.992 12:02:59 -- common/autotest_common.sh@10 -- # set +x 00:10:52.252 12:02:59 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:10:52.252 12:02:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:52.252 12:02:59 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:10:52.252 12:02:59 -- bdev/bdev_raid.sh@274 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:52.542 12:02:59 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:10:52.542 12:02:59 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:52.542 12:02:59 -- bdev/bdev_raid.sh@279 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:52.800 [2024-07-25 12:02:59.874602] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:52.800 12:02:59 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:10:52.800 12:02:59 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:52.800 12:02:59 -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:52.800 12:02:59 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:10:52.800 12:03:00 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:10:52.800 12:03:00 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:52.800 12:03:00 -- bdev/bdev_raid.sh@279 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:10:53.059 [2024-07-25 12:03:00.222934] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:53.059 [2024-07-25 12:03:00.222972] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19c3180 name Existed_Raid, state offline 00:10:53.059 12:03:00 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:10:53.059 12:03:00 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:10:53.059 12:03:00 -- bdev/bdev_raid.sh@281 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:53.059 12:03:00 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:10:53.317 12:03:00 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:10:53.317 
12:03:00 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:10:53.317 12:03:00 -- bdev/bdev_raid.sh@287 -- # killprocess 1227725 00:10:53.317 12:03:00 -- common/autotest_common.sh@926 -- # '[' -z 1227725 ']' 00:10:53.317 12:03:00 -- common/autotest_common.sh@930 -- # kill -0 1227725 00:10:53.317 12:03:00 -- common/autotest_common.sh@931 -- # uname 00:10:53.317 12:03:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:53.317 12:03:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1227725 00:10:53.317 12:03:00 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:53.317 12:03:00 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:53.317 12:03:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1227725' 00:10:53.317 killing process with pid 1227725 00:10:53.317 12:03:00 -- common/autotest_common.sh@945 -- # kill 1227725 00:10:53.317 [2024-07-25 12:03:00.459480] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:53.317 12:03:00 -- common/autotest_common.sh@950 -- # wait 1227725 00:10:53.317 [2024-07-25 12:03:00.460387] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:53.575 12:03:00 -- bdev/bdev_raid.sh@289 -- # return 0 00:10:53.575 00:10:53.575 real 0m9.008s 00:10:53.575 user 0m15.727s 00:10:53.575 sys 0m1.762s 00:10:53.575 12:03:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:53.575 12:03:00 -- common/autotest_common.sh@10 -- # set +x 00:10:53.575 ************************************ 00:10:53.575 END TEST raid_state_function_test_sb 00:10:53.575 ************************************ 00:10:53.575 12:03:00 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:10:53.575 12:03:00 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:10:53.575 12:03:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:53.575 12:03:00 -- common/autotest_common.sh@10 -- # set +x 00:10:53.575 
************************************ 00:10:53.575 START TEST raid_superblock_test 00:10:53.575 ************************************ 00:10:53.575 12:03:00 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 3 00:10:53.575 12:03:00 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0 00:10:53.575 12:03:00 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:10:53.576 12:03:00 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:10:53.576 12:03:00 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:10:53.576 12:03:00 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:10:53.576 12:03:00 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:10:53.576 12:03:00 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:10:53.576 12:03:00 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:10:53.576 12:03:00 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:10:53.576 12:03:00 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:10:53.576 12:03:00 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:10:53.576 12:03:00 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:10:53.576 12:03:00 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:10:53.576 12:03:00 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']' 00:10:53.576 12:03:00 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:10:53.576 12:03:00 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:10:53.576 12:03:00 -- bdev/bdev_raid.sh@357 -- # raid_pid=1229248 00:10:53.576 12:03:00 -- bdev/bdev_raid.sh@358 -- # waitforlisten 1229248 /var/tmp/spdk-raid.sock 00:10:53.576 12:03:00 -- common/autotest_common.sh@819 -- # '[' -z 1229248 ']' 00:10:53.576 12:03:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:53.576 12:03:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:53.576 12:03:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:10:53.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:53.576 12:03:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:53.576 12:03:00 -- common/autotest_common.sh@10 -- # set +x 00:10:53.576 12:03:00 -- bdev/bdev_raid.sh@356 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:10:53.576 [2024-07-25 12:03:00.777564] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:10:53.576 [2024-07-25 12:03:00.777616] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1229248 ] 00:10:53.576 [2024-07-25 12:03:00.863653] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:53.834 [2024-07-25 12:03:00.951317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.834 [2024-07-25 12:03:01.012405] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:53.834 [2024-07-25 12:03:01.012437] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:54.402 12:03:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:54.402 12:03:01 -- common/autotest_common.sh@852 -- # return 0 00:10:54.402 12:03:01 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:10:54.402 12:03:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:10:54.402 12:03:01 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:10:54.402 12:03:01 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:10:54.402 12:03:01 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:54.402 12:03:01 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:54.402 12:03:01 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:10:54.402 12:03:01 -- bdev/bdev_raid.sh@368 
-- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:54.402 12:03:01 -- bdev/bdev_raid.sh@370 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:10:54.402 malloc1 00:10:54.661 12:03:01 -- bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:54.661 [2024-07-25 12:03:01.855295] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:54.661 [2024-07-25 12:03:01.855338] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.661 [2024-07-25 12:03:01.855371] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x8538d0 00:10:54.661 [2024-07-25 12:03:01.855380] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.661 [2024-07-25 12:03:01.856729] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.661 [2024-07-25 12:03:01.856753] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:54.661 pt1 00:10:54.661 12:03:01 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:10:54.661 12:03:01 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:10:54.661 12:03:01 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:10:54.661 12:03:01 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:10:54.661 12:03:01 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:54.661 12:03:01 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:54.661 12:03:01 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:10:54.661 12:03:01 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:54.661 12:03:01 -- bdev/bdev_raid.sh@370 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_create 32 512 -b malloc2 00:10:54.920 malloc2 00:10:54.920 12:03:02 -- bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:54.920 [2024-07-25 12:03:02.189230] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:54.920 [2024-07-25 12:03:02.189278] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:54.920 [2024-07-25 12:03:02.189293] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9fb1a0 00:10:54.920 [2024-07-25 12:03:02.189301] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:54.920 [2024-07-25 12:03:02.190490] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:54.920 [2024-07-25 12:03:02.190512] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:54.920 pt2 00:10:54.920 12:03:02 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:10:54.920 12:03:02 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:10:54.920 12:03:02 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:10:54.920 12:03:02 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:10:54.920 12:03:02 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:54.920 12:03:02 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:54.920 12:03:02 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:10:54.920 12:03:02 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:54.920 12:03:02 -- bdev/bdev_raid.sh@370 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:10:55.179 malloc3 00:10:55.179 12:03:02 -- bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:55.437 [2024-07-25 12:03:02.513805] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:55.437 [2024-07-25 12:03:02.513845] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:55.437 [2024-07-25 12:03:02.513875] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9fb700 00:10:55.437 [2024-07-25 12:03:02.513883] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:55.437 [2024-07-25 12:03:02.515061] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:55.438 [2024-07-25 12:03:02.515087] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:55.438 pt3 00:10:55.438 12:03:02 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:10:55.438 12:03:02 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:10:55.438 12:03:02 -- bdev/bdev_raid.sh@375 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:10:55.438 [2024-07-25 12:03:02.674255] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:55.438 [2024-07-25 12:03:02.675284] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:55.438 [2024-07-25 12:03:02.675325] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:55.438 [2024-07-25 12:03:02.675456] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x9fecf0 00:10:55.438 [2024-07-25 12:03:02.675464] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:55.438 [2024-07-25 12:03:02.675608] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x9fa210 00:10:55.438 [2024-07-25 12:03:02.675703] bdev_raid.c:1614:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x9fecf0 00:10:55.438 [2024-07-25 12:03:02.675709] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x9fecf0 00:10:55.438 [2024-07-25 12:03:02.675779] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.438 12:03:02 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:55.438 12:03:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:55.438 12:03:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:55.438 12:03:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:55.438 12:03:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:55.438 12:03:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:55.438 12:03:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:55.438 12:03:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:55.438 12:03:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:55.438 12:03:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:55.438 12:03:02 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:55.438 12:03:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:55.696 12:03:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:55.696 "name": "raid_bdev1", 00:10:55.696 "uuid": "9120b40f-c8b4-4700-aaea-f8276bbf6059", 00:10:55.697 "strip_size_kb": 64, 00:10:55.697 "state": "online", 00:10:55.697 "raid_level": "raid0", 00:10:55.697 "superblock": true, 00:10:55.697 "num_base_bdevs": 3, 00:10:55.697 "num_base_bdevs_discovered": 3, 00:10:55.697 "num_base_bdevs_operational": 3, 00:10:55.697 "base_bdevs_list": [ 00:10:55.697 { 00:10:55.697 "name": "pt1", 00:10:55.697 "uuid": "622616a7-b42a-59cd-b180-e22a4bc7818c", 00:10:55.697 "is_configured": true, 00:10:55.697 "data_offset": 2048, 
00:10:55.697 "data_size": 63488 00:10:55.697 }, 00:10:55.697 { 00:10:55.697 "name": "pt2", 00:10:55.697 "uuid": "4c4d0965-d841-5a4a-9d05-565535616f8b", 00:10:55.697 "is_configured": true, 00:10:55.697 "data_offset": 2048, 00:10:55.697 "data_size": 63488 00:10:55.697 }, 00:10:55.697 { 00:10:55.697 "name": "pt3", 00:10:55.697 "uuid": "46b24395-c6bf-5e4e-afb0-488ab8aedda5", 00:10:55.697 "is_configured": true, 00:10:55.697 "data_offset": 2048, 00:10:55.697 "data_size": 63488 00:10:55.697 } 00:10:55.697 ] 00:10:55.697 }' 00:10:55.697 12:03:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:55.697 12:03:02 -- common/autotest_common.sh@10 -- # set +x 00:10:56.269 12:03:03 -- bdev/bdev_raid.sh@379 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:56.269 12:03:03 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:10:56.269 [2024-07-25 12:03:03.472424] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:56.269 12:03:03 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=9120b40f-c8b4-4700-aaea-f8276bbf6059 00:10:56.269 12:03:03 -- bdev/bdev_raid.sh@380 -- # '[' -z 9120b40f-c8b4-4700-aaea-f8276bbf6059 ']' 00:10:56.269 12:03:03 -- bdev/bdev_raid.sh@385 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:56.542 [2024-07-25 12:03:03.648720] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:56.542 [2024-07-25 12:03:03.648741] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:56.542 [2024-07-25 12:03:03.648774] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:56.542 [2024-07-25 12:03:03.648812] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:56.542 [2024-07-25 12:03:03.648820] bdev_raid.c: 351:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x9fecf0 name raid_bdev1, state offline 00:10:56.542 12:03:03 -- bdev/bdev_raid.sh@386 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:56.542 12:03:03 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:10:56.542 12:03:03 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:10:56.542 12:03:03 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:10:56.542 12:03:03 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:10:56.542 12:03:03 -- bdev/bdev_raid.sh@393 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:10:56.800 12:03:04 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:10:56.800 12:03:04 -- bdev/bdev_raid.sh@393 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:57.058 12:03:04 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:10:57.058 12:03:04 -- bdev/bdev_raid.sh@393 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:10:57.058 12:03:04 -- bdev/bdev_raid.sh@395 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:10:57.058 12:03:04 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:57.316 12:03:04 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:10:57.316 12:03:04 -- bdev/bdev_raid.sh@401 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:10:57.316 12:03:04 -- common/autotest_common.sh@640 -- # local es=0 00:10:57.316 12:03:04 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 
-r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:10:57.316 12:03:04 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:57.317 12:03:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:57.317 12:03:04 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:57.317 12:03:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:57.317 12:03:04 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:57.317 12:03:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:10:57.317 12:03:04 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:10:57.317 12:03:04 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:10:57.317 12:03:04 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:10:57.575 [2024-07-25 12:03:04.663320] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:57.575 [2024-07-25 12:03:04.664341] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:57.575 [2024-07-25 12:03:04.664371] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:57.575 [2024-07-25 12:03:04.664404] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:10:57.575 [2024-07-25 12:03:04.664433] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:10:57.575 [2024-07-25 12:03:04.664447] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing 
raid superblock found on bdev malloc3 00:10:57.575 [2024-07-25 12:03:04.664464] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:57.575 [2024-07-25 12:03:04.664470] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x8541c0 name raid_bdev1, state configuring 00:10:57.575 request: 00:10:57.575 { 00:10:57.575 "name": "raid_bdev1", 00:10:57.575 "raid_level": "raid0", 00:10:57.575 "base_bdevs": [ 00:10:57.575 "malloc1", 00:10:57.575 "malloc2", 00:10:57.575 "malloc3" 00:10:57.575 ], 00:10:57.575 "superblock": false, 00:10:57.575 "strip_size_kb": 64, 00:10:57.575 "method": "bdev_raid_create", 00:10:57.575 "req_id": 1 00:10:57.575 } 00:10:57.575 Got JSON-RPC error response 00:10:57.575 response: 00:10:57.575 { 00:10:57.575 "code": -17, 00:10:57.575 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:57.575 } 00:10:57.575 12:03:04 -- common/autotest_common.sh@643 -- # es=1 00:10:57.575 12:03:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:10:57.575 12:03:04 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:10:57.575 12:03:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:10:57.575 12:03:04 -- bdev/bdev_raid.sh@403 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:57.575 12:03:04 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:10:57.575 12:03:04 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:10:57.575 12:03:04 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:10:57.575 12:03:04 -- bdev/bdev_raid.sh@409 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:57.834 [2024-07-25 12:03:04.992135] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:57.834 [2024-07-25 12:03:04.992174] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:57.834 [2024-07-25 
12:03:04.992205] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9fc7c0 00:10:57.834 [2024-07-25 12:03:04.992214] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:57.834 [2024-07-25 12:03:04.993421] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:57.834 [2024-07-25 12:03:04.993443] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:57.834 [2024-07-25 12:03:04.993493] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:10:57.834 [2024-07-25 12:03:04.993511] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:57.834 pt1 00:10:57.834 12:03:05 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:57.834 12:03:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:57.834 12:03:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:57.834 12:03:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:57.834 12:03:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:57.834 12:03:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:57.834 12:03:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:57.834 12:03:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:57.834 12:03:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:57.834 12:03:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:57.834 12:03:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:57.834 12:03:05 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:58.092 12:03:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:58.092 "name": "raid_bdev1", 00:10:58.092 "uuid": "9120b40f-c8b4-4700-aaea-f8276bbf6059", 00:10:58.092 "strip_size_kb": 64, 
00:10:58.092 "state": "configuring", 00:10:58.092 "raid_level": "raid0", 00:10:58.092 "superblock": true, 00:10:58.092 "num_base_bdevs": 3, 00:10:58.092 "num_base_bdevs_discovered": 1, 00:10:58.092 "num_base_bdevs_operational": 3, 00:10:58.092 "base_bdevs_list": [ 00:10:58.092 { 00:10:58.092 "name": "pt1", 00:10:58.092 "uuid": "622616a7-b42a-59cd-b180-e22a4bc7818c", 00:10:58.092 "is_configured": true, 00:10:58.092 "data_offset": 2048, 00:10:58.092 "data_size": 63488 00:10:58.092 }, 00:10:58.092 { 00:10:58.092 "name": null, 00:10:58.092 "uuid": "4c4d0965-d841-5a4a-9d05-565535616f8b", 00:10:58.092 "is_configured": false, 00:10:58.092 "data_offset": 2048, 00:10:58.092 "data_size": 63488 00:10:58.092 }, 00:10:58.092 { 00:10:58.092 "name": null, 00:10:58.093 "uuid": "46b24395-c6bf-5e4e-afb0-488ab8aedda5", 00:10:58.093 "is_configured": false, 00:10:58.093 "data_offset": 2048, 00:10:58.093 "data_size": 63488 00:10:58.093 } 00:10:58.093 ] 00:10:58.093 }' 00:10:58.093 12:03:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:58.093 12:03:05 -- common/autotest_common.sh@10 -- # set +x 00:10:58.659 12:03:05 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:10:58.659 12:03:05 -- bdev/bdev_raid.sh@416 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:58.659 [2024-07-25 12:03:05.830303] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:58.659 [2024-07-25 12:03:05.830343] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.659 [2024-07-25 12:03:05.830376] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9fd770 00:10:58.660 [2024-07-25 12:03:05.830385] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.660 [2024-07-25 12:03:05.830644] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.660 
[2024-07-25 12:03:05.830655] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:58.660 [2024-07-25 12:03:05.830703] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:10:58.660 [2024-07-25 12:03:05.830717] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:58.660 pt2 00:10:58.660 12:03:05 -- bdev/bdev_raid.sh@417 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:58.918 [2024-07-25 12:03:05.986711] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:58.918 12:03:06 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:58.918 12:03:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:58.918 12:03:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:10:58.918 12:03:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:58.918 12:03:06 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:58.918 12:03:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:58.918 12:03:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:58.918 12:03:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:58.918 12:03:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:58.918 12:03:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:58.918 12:03:06 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:58.918 12:03:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.918 12:03:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:10:58.918 "name": "raid_bdev1", 00:10:58.918 "uuid": "9120b40f-c8b4-4700-aaea-f8276bbf6059", 00:10:58.918 "strip_size_kb": 64, 00:10:58.918 "state": "configuring", 00:10:58.918 "raid_level": "raid0", 00:10:58.918 
"superblock": true, 00:10:58.918 "num_base_bdevs": 3, 00:10:58.918 "num_base_bdevs_discovered": 1, 00:10:58.918 "num_base_bdevs_operational": 3, 00:10:58.918 "base_bdevs_list": [ 00:10:58.918 { 00:10:58.918 "name": "pt1", 00:10:58.918 "uuid": "622616a7-b42a-59cd-b180-e22a4bc7818c", 00:10:58.918 "is_configured": true, 00:10:58.918 "data_offset": 2048, 00:10:58.918 "data_size": 63488 00:10:58.918 }, 00:10:58.918 { 00:10:58.918 "name": null, 00:10:58.918 "uuid": "4c4d0965-d841-5a4a-9d05-565535616f8b", 00:10:58.918 "is_configured": false, 00:10:58.918 "data_offset": 2048, 00:10:58.918 "data_size": 63488 00:10:58.918 }, 00:10:58.918 { 00:10:58.918 "name": null, 00:10:58.918 "uuid": "46b24395-c6bf-5e4e-afb0-488ab8aedda5", 00:10:58.918 "is_configured": false, 00:10:58.918 "data_offset": 2048, 00:10:58.918 "data_size": 63488 00:10:58.918 } 00:10:58.918 ] 00:10:58.918 }' 00:10:58.918 12:03:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:10:58.918 12:03:06 -- common/autotest_common.sh@10 -- # set +x 00:10:59.486 12:03:06 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:10:59.486 12:03:06 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:10:59.486 12:03:06 -- bdev/bdev_raid.sh@423 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:59.744 [2024-07-25 12:03:06.832907] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:59.744 [2024-07-25 12:03:06.832946] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.744 [2024-07-25 12:03:06.832962] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9f9ed0 00:10:59.744 [2024-07-25 12:03:06.832970] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.744 [2024-07-25 12:03:06.833230] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.744 [2024-07-25 
12:03:06.833245] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:59.744 [2024-07-25 12:03:06.833297] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:10:59.744 [2024-07-25 12:03:06.833312] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:59.744 pt2 00:10:59.744 12:03:06 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:10:59.744 12:03:06 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:10:59.744 12:03:06 -- bdev/bdev_raid.sh@423 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:59.744 [2024-07-25 12:03:07.013364] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:59.744 [2024-07-25 12:03:07.013387] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.744 [2024-07-25 12:03:07.013398] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9ff300 00:10:59.744 [2024-07-25 12:03:07.013406] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.744 [2024-07-25 12:03:07.013594] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.744 [2024-07-25 12:03:07.013605] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:59.744 [2024-07-25 12:03:07.013638] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:10:59.744 [2024-07-25 12:03:07.013649] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:59.744 [2024-07-25 12:03:07.013715] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x9fa670 00:10:59.744 [2024-07-25 12:03:07.013721] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:59.744 [2024-07-25 12:03:07.013831] bdev_raid.c: 
232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa007c0 00:10:59.744 [2024-07-25 12:03:07.013910] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x9fa670 00:10:59.744 [2024-07-25 12:03:07.013916] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x9fa670 00:10:59.744 [2024-07-25 12:03:07.013977] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:59.744 pt3 00:10:59.744 12:03:07 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:10:59.744 12:03:07 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:10:59.744 12:03:07 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:59.744 12:03:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:10:59.744 12:03:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:10:59.744 12:03:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:10:59.744 12:03:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:10:59.744 12:03:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:10:59.744 12:03:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:10:59.744 12:03:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:10:59.744 12:03:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:10:59.744 12:03:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:10:59.744 12:03:07 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:59.744 12:03:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.002 12:03:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:00.002 "name": "raid_bdev1", 00:11:00.003 "uuid": "9120b40f-c8b4-4700-aaea-f8276bbf6059", 00:11:00.003 "strip_size_kb": 64, 00:11:00.003 "state": "online", 00:11:00.003 "raid_level": "raid0", 00:11:00.003 "superblock": true, 00:11:00.003 "num_base_bdevs": 3, 
00:11:00.003 "num_base_bdevs_discovered": 3, 00:11:00.003 "num_base_bdevs_operational": 3, 00:11:00.003 "base_bdevs_list": [ 00:11:00.003 { 00:11:00.003 "name": "pt1", 00:11:00.003 "uuid": "622616a7-b42a-59cd-b180-e22a4bc7818c", 00:11:00.003 "is_configured": true, 00:11:00.003 "data_offset": 2048, 00:11:00.003 "data_size": 63488 00:11:00.003 }, 00:11:00.003 { 00:11:00.003 "name": "pt2", 00:11:00.003 "uuid": "4c4d0965-d841-5a4a-9d05-565535616f8b", 00:11:00.003 "is_configured": true, 00:11:00.003 "data_offset": 2048, 00:11:00.003 "data_size": 63488 00:11:00.003 }, 00:11:00.003 { 00:11:00.003 "name": "pt3", 00:11:00.003 "uuid": "46b24395-c6bf-5e4e-afb0-488ab8aedda5", 00:11:00.003 "is_configured": true, 00:11:00.003 "data_offset": 2048, 00:11:00.003 "data_size": 63488 00:11:00.003 } 00:11:00.003 ] 00:11:00.003 }' 00:11:00.003 12:03:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:00.003 12:03:07 -- common/autotest_common.sh@10 -- # set +x 00:11:00.569 12:03:07 -- bdev/bdev_raid.sh@430 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:00.569 12:03:07 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:11:00.569 [2024-07-25 12:03:07.835756] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:00.569 12:03:07 -- bdev/bdev_raid.sh@430 -- # '[' 9120b40f-c8b4-4700-aaea-f8276bbf6059 '!=' 9120b40f-c8b4-4700-aaea-f8276bbf6059 ']' 00:11:00.569 12:03:07 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:11:00.569 12:03:07 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:11:00.569 12:03:07 -- bdev/bdev_raid.sh@197 -- # return 1 00:11:00.569 12:03:07 -- bdev/bdev_raid.sh@511 -- # killprocess 1229248 00:11:00.569 12:03:07 -- common/autotest_common.sh@926 -- # '[' -z 1229248 ']' 00:11:00.569 12:03:07 -- common/autotest_common.sh@930 -- # kill -0 1229248 00:11:00.569 12:03:07 -- common/autotest_common.sh@931 -- # uname 00:11:00.569 12:03:07 -- 
common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:00.569 12:03:07 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1229248 00:11:00.827 12:03:07 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:00.827 12:03:07 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:00.827 12:03:07 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1229248' 00:11:00.827 killing process with pid 1229248 00:11:00.827 12:03:07 -- common/autotest_common.sh@945 -- # kill 1229248 00:11:00.827 [2024-07-25 12:03:07.901804] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:00.827 12:03:07 -- common/autotest_common.sh@950 -- # wait 1229248 00:11:00.827 [2024-07-25 12:03:07.901846] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:00.827 [2024-07-25 12:03:07.901884] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:00.827 [2024-07-25 12:03:07.901892] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x9fa670 name raid_bdev1, state offline 00:11:00.827 [2024-07-25 12:03:07.927528] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:01.086 12:03:08 -- bdev/bdev_raid.sh@513 -- # return 0 00:11:01.086 00:11:01.086 real 0m7.422s 00:11:01.086 user 0m12.843s 00:11:01.086 sys 0m1.485s 00:11:01.086 12:03:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:01.086 12:03:08 -- common/autotest_common.sh@10 -- # set +x 00:11:01.086 ************************************ 00:11:01.086 END TEST raid_superblock_test 00:11:01.086 ************************************ 00:11:01.086 12:03:08 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:11:01.086 12:03:08 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:11:01.086 12:03:08 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:11:01.086 12:03:08 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:11:01.086 12:03:08 -- common/autotest_common.sh@10 -- # set +x 00:11:01.086 ************************************ 00:11:01.086 START TEST raid_state_function_test 00:11:01.086 ************************************ 00:11:01.086 12:03:08 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 false 00:11:01.086 12:03:08 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:11:01.086 12:03:08 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:11:01.086 12:03:08 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:11:01.086 12:03:08 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:11:01.086 12:03:08 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:11:01.086 12:03:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:01.086 12:03:08 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:11:01.086 12:03:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:01.086 12:03:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:01.086 12:03:08 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:11:01.086 12:03:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:01.086 12:03:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:01.086 12:03:08 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:11:01.086 12:03:08 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:01.086 12:03:08 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:01.086 12:03:08 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:01.086 12:03:08 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:11:01.086 12:03:08 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:11:01.086 12:03:08 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:11:01.086 12:03:08 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:11:01.086 12:03:08 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:11:01.086 12:03:08 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:11:01.086 
12:03:08 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:11:01.086 12:03:08 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:11:01.086 12:03:08 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:11:01.086 12:03:08 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:11:01.086 12:03:08 -- bdev/bdev_raid.sh@226 -- # raid_pid=1230577 00:11:01.086 12:03:08 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 1230577' 00:11:01.086 Process raid pid: 1230577 00:11:01.086 12:03:08 -- bdev/bdev_raid.sh@228 -- # waitforlisten 1230577 /var/tmp/spdk-raid.sock 00:11:01.086 12:03:08 -- common/autotest_common.sh@819 -- # '[' -z 1230577 ']' 00:11:01.086 12:03:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:01.086 12:03:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:01.086 12:03:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:01.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:01.086 12:03:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:01.086 12:03:08 -- common/autotest_common.sh@10 -- # set +x 00:11:01.086 12:03:08 -- bdev/bdev_raid.sh@225 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:01.086 [2024-07-25 12:03:08.244770] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:11:01.086 [2024-07-25 12:03:08.244827] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:01.086 [2024-07-25 12:03:08.339394] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:01.345 [2024-07-25 12:03:08.432789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.345 [2024-07-25 12:03:08.495366] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.345 [2024-07-25 12:03:08.495392] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.911 12:03:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:01.911 12:03:09 -- common/autotest_common.sh@852 -- # return 0 00:11:01.911 12:03:09 -- bdev/bdev_raid.sh@232 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:01.911 [2024-07-25 12:03:09.198850] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:01.911 [2024-07-25 12:03:09.198881] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:01.911 [2024-07-25 12:03:09.198887] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:01.911 [2024-07-25 12:03:09.198895] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:01.911 [2024-07-25 12:03:09.198918] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:01.911 [2024-07-25 12:03:09.198925] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:01.911 12:03:09 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:01.911 12:03:09 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:01.911 12:03:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:01.911 12:03:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:01.911 12:03:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:01.911 12:03:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:01.911 12:03:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:01.911 12:03:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:01.911 12:03:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:01.911 12:03:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:01.911 12:03:09 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:01.911 12:03:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.169 12:03:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:02.169 "name": "Existed_Raid", 00:11:02.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.169 "strip_size_kb": 64, 00:11:02.169 "state": "configuring", 00:11:02.169 "raid_level": "concat", 00:11:02.169 "superblock": false, 00:11:02.169 "num_base_bdevs": 3, 00:11:02.169 "num_base_bdevs_discovered": 0, 00:11:02.169 "num_base_bdevs_operational": 3, 00:11:02.169 "base_bdevs_list": [ 00:11:02.169 { 00:11:02.169 "name": "BaseBdev1", 00:11:02.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.169 "is_configured": false, 00:11:02.169 "data_offset": 0, 00:11:02.169 "data_size": 0 00:11:02.169 }, 00:11:02.169 { 00:11:02.169 "name": "BaseBdev2", 00:11:02.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.169 "is_configured": false, 00:11:02.169 "data_offset": 0, 00:11:02.169 "data_size": 0 00:11:02.169 }, 00:11:02.169 { 00:11:02.169 "name": "BaseBdev3", 00:11:02.169 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.169 "is_configured": false, 
00:11:02.169 "data_offset": 0, 00:11:02.169 "data_size": 0 00:11:02.169 } 00:11:02.169 ] 00:11:02.169 }' 00:11:02.169 12:03:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:02.169 12:03:09 -- common/autotest_common.sh@10 -- # set +x 00:11:02.736 12:03:09 -- bdev/bdev_raid.sh@234 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:02.736 [2024-07-25 12:03:10.036938] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:02.736 [2024-07-25 12:03:10.036961] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf30d60 name Existed_Raid, state configuring 00:11:02.995 12:03:10 -- bdev/bdev_raid.sh@238 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:02.995 [2024-07-25 12:03:10.205384] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:02.995 [2024-07-25 12:03:10.205409] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:02.995 [2024-07-25 12:03:10.205415] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:02.995 [2024-07-25 12:03:10.205423] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:02.995 [2024-07-25 12:03:10.205429] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:02.995 [2024-07-25 12:03:10.205436] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:02.995 12:03:10 -- bdev/bdev_raid.sh@239 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:03.254 [2024-07-25 12:03:10.390513] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
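The trace above shows the test's `waitforbdev` helper: after `bdev_malloc_create` it repeatedly calls `bdev_get_bdevs -b BaseBdev1 -t 2000` until the new base bdev is claimed. A minimal sketch of that poll-until-present pattern (hypothetical helper, not SPDK code; `get_bdevs` stands in for the real RPC call):

```python
import time

def wait_for_bdev(get_bdevs, name, timeout_ms=2000, interval_ms=100):
    """Poll until a bdev named `name` appears, mirroring the trace's
    waitforbdev helper. `get_bdevs` is a caller-supplied callable that
    returns a list of bdev dicts (a stand-in for the bdev_get_bdevs RPC)."""
    deadline = time.monotonic() + timeout_ms / 1000.0
    while time.monotonic() < deadline:
        if any(b["name"] == name for b in get_bdevs()):
            return True
        time.sleep(interval_ms / 1000.0)
    return False

# Fake RPC reply in which the bdev is found immediately
print(wait_for_bdev(lambda: [{"name": "BaseBdev1"}], "BaseBdev1"))
```

In the real test the 2000 ms timeout comes from `bdev_timeout=2000` set when `waitforbdev` is called without an explicit timeout, as seen in the trace.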
00:11:03.254 BaseBdev1 00:11:03.254 12:03:10 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:11:03.254 12:03:10 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:11:03.254 12:03:10 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:03.254 12:03:10 -- common/autotest_common.sh@889 -- # local i 00:11:03.254 12:03:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:03.254 12:03:10 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:03.254 12:03:10 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:03.513 12:03:10 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:03.513 [ 00:11:03.513 { 00:11:03.513 "name": "BaseBdev1", 00:11:03.513 "aliases": [ 00:11:03.513 "3f47c2df-f499-4269-b3bd-8337b179e4d6" 00:11:03.513 ], 00:11:03.513 "product_name": "Malloc disk", 00:11:03.513 "block_size": 512, 00:11:03.513 "num_blocks": 65536, 00:11:03.513 "uuid": "3f47c2df-f499-4269-b3bd-8337b179e4d6", 00:11:03.513 "assigned_rate_limits": { 00:11:03.513 "rw_ios_per_sec": 0, 00:11:03.513 "rw_mbytes_per_sec": 0, 00:11:03.513 "r_mbytes_per_sec": 0, 00:11:03.513 "w_mbytes_per_sec": 0 00:11:03.513 }, 00:11:03.513 "claimed": true, 00:11:03.513 "claim_type": "exclusive_write", 00:11:03.513 "zoned": false, 00:11:03.513 "supported_io_types": { 00:11:03.513 "read": true, 00:11:03.513 "write": true, 00:11:03.513 "unmap": true, 00:11:03.513 "write_zeroes": true, 00:11:03.513 "flush": true, 00:11:03.513 "reset": true, 00:11:03.513 "compare": false, 00:11:03.513 "compare_and_write": false, 00:11:03.513 "abort": true, 00:11:03.513 "nvme_admin": false, 00:11:03.513 "nvme_io": false 00:11:03.513 }, 00:11:03.513 "memory_domains": [ 00:11:03.513 { 00:11:03.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.513 "dma_device_type": 2 
00:11:03.513 } 00:11:03.513 ], 00:11:03.513 "driver_specific": {} 00:11:03.513 } 00:11:03.513 ] 00:11:03.513 12:03:10 -- common/autotest_common.sh@895 -- # return 0 00:11:03.513 12:03:10 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:03.513 12:03:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:03.513 12:03:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:03.513 12:03:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:03.513 12:03:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:03.513 12:03:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:03.513 12:03:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:03.513 12:03:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:03.513 12:03:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:03.513 12:03:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:03.513 12:03:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.513 12:03:10 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:03.772 12:03:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:03.772 "name": "Existed_Raid", 00:11:03.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.772 "strip_size_kb": 64, 00:11:03.772 "state": "configuring", 00:11:03.772 "raid_level": "concat", 00:11:03.772 "superblock": false, 00:11:03.772 "num_base_bdevs": 3, 00:11:03.772 "num_base_bdevs_discovered": 1, 00:11:03.772 "num_base_bdevs_operational": 3, 00:11:03.772 "base_bdevs_list": [ 00:11:03.772 { 00:11:03.772 "name": "BaseBdev1", 00:11:03.772 "uuid": "3f47c2df-f499-4269-b3bd-8337b179e4d6", 00:11:03.772 "is_configured": true, 00:11:03.772 "data_offset": 0, 00:11:03.772 "data_size": 65536 00:11:03.772 }, 00:11:03.772 { 00:11:03.772 "name": "BaseBdev2", 00:11:03.772 
"uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.772 "is_configured": false, 00:11:03.772 "data_offset": 0, 00:11:03.772 "data_size": 0 00:11:03.772 }, 00:11:03.772 { 00:11:03.772 "name": "BaseBdev3", 00:11:03.772 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.772 "is_configured": false, 00:11:03.772 "data_offset": 0, 00:11:03.772 "data_size": 0 00:11:03.772 } 00:11:03.772 ] 00:11:03.772 }' 00:11:03.772 12:03:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:03.772 12:03:10 -- common/autotest_common.sh@10 -- # set +x 00:11:04.339 12:03:11 -- bdev/bdev_raid.sh@242 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:04.339 [2024-07-25 12:03:11.557550] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:04.339 [2024-07-25 12:03:11.557579] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf30630 name Existed_Raid, state configuring 00:11:04.339 12:03:11 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:11:04.339 12:03:11 -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:04.597 [2024-07-25 12:03:11.726002] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:04.597 [2024-07-25 12:03:11.727062] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:04.597 [2024-07-25 12:03:11.727087] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:04.597 [2024-07-25 12:03:11.727094] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:04.598 [2024-07-25 12:03:11.727101] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:04.598 12:03:11 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 
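The `verify_raid_bdev_state` calls in the trace extract one raid bdev from the `bdev_raid_get_bdevs all` reply with `jq -r '.[] | select(.name == "Existed_Raid")'` and compare fields such as `state` and `num_base_bdevs_discovered` against expectations. A small Python sketch of that same check (hypothetical helper, shaped after the JSON visible in the log, not part of the test suite):

```python
import json

def verify_raid_bdev_state(rpc_output, name, expected_state,
                           expected_level, expected_discovered):
    """Replicate the jq filter `.[] | select(.name == NAME)` from the
    trace and assert on the fields the test checks (sketch only)."""
    info = next(b for b in json.loads(rpc_output) if b["name"] == name)
    assert info["state"] == expected_state
    assert info["raid_level"] == expected_level
    assert info["num_base_bdevs_discovered"] == expected_discovered
    return info

# Sample reply shaped like the bdev_raid_get_bdevs output in the log above
sample = json.dumps([{
    "name": "Existed_Raid",
    "strip_size_kb": 64,
    "state": "configuring",
    "raid_level": "concat",
    "num_base_bdevs": 3,
    "num_base_bdevs_discovered": 1,
}])

info = verify_raid_bdev_state(sample, "Existed_Raid",
                              "configuring", "concat", 1)
print(info["state"])
```

This matches the progression in the trace: `num_base_bdevs_discovered` climbs from 0 to 3 as each BaseBdev is created, and `state` flips from `configuring` to `online` once all three are claimed.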
00:11:04.598 12:03:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:04.598 12:03:11 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:04.598 12:03:11 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:04.598 12:03:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:04.598 12:03:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:04.598 12:03:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:04.598 12:03:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:04.598 12:03:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:04.598 12:03:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:04.598 12:03:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:04.598 12:03:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:04.598 12:03:11 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:04.598 12:03:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.857 12:03:11 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:04.857 "name": "Existed_Raid", 00:11:04.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.857 "strip_size_kb": 64, 00:11:04.857 "state": "configuring", 00:11:04.857 "raid_level": "concat", 00:11:04.857 "superblock": false, 00:11:04.857 "num_base_bdevs": 3, 00:11:04.857 "num_base_bdevs_discovered": 1, 00:11:04.857 "num_base_bdevs_operational": 3, 00:11:04.857 "base_bdevs_list": [ 00:11:04.857 { 00:11:04.857 "name": "BaseBdev1", 00:11:04.857 "uuid": "3f47c2df-f499-4269-b3bd-8337b179e4d6", 00:11:04.857 "is_configured": true, 00:11:04.857 "data_offset": 0, 00:11:04.857 "data_size": 65536 00:11:04.857 }, 00:11:04.857 { 00:11:04.857 "name": "BaseBdev2", 00:11:04.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.857 "is_configured": false, 
00:11:04.857 "data_offset": 0, 00:11:04.857 "data_size": 0 00:11:04.857 }, 00:11:04.857 { 00:11:04.857 "name": "BaseBdev3", 00:11:04.857 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.857 "is_configured": false, 00:11:04.857 "data_offset": 0, 00:11:04.857 "data_size": 0 00:11:04.857 } 00:11:04.857 ] 00:11:04.857 }' 00:11:04.857 12:03:11 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:04.857 12:03:11 -- common/autotest_common.sh@10 -- # set +x 00:11:05.115 12:03:12 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:05.374 [2024-07-25 12:03:12.576186] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:05.374 BaseBdev2 00:11:05.374 12:03:12 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:11:05.374 12:03:12 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:11:05.374 12:03:12 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:05.374 12:03:12 -- common/autotest_common.sh@889 -- # local i 00:11:05.374 12:03:12 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:05.374 12:03:12 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:05.374 12:03:12 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:05.633 12:03:12 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:05.633 [ 00:11:05.633 { 00:11:05.633 "name": "BaseBdev2", 00:11:05.633 "aliases": [ 00:11:05.633 "da80b5d2-1c12-4c2c-ad95-19a541087e86" 00:11:05.633 ], 00:11:05.633 "product_name": "Malloc disk", 00:11:05.633 "block_size": 512, 00:11:05.633 "num_blocks": 65536, 00:11:05.633 "uuid": "da80b5d2-1c12-4c2c-ad95-19a541087e86", 00:11:05.633 "assigned_rate_limits": { 00:11:05.633 
"rw_ios_per_sec": 0, 00:11:05.633 "rw_mbytes_per_sec": 0, 00:11:05.633 "r_mbytes_per_sec": 0, 00:11:05.633 "w_mbytes_per_sec": 0 00:11:05.633 }, 00:11:05.633 "claimed": true, 00:11:05.633 "claim_type": "exclusive_write", 00:11:05.633 "zoned": false, 00:11:05.633 "supported_io_types": { 00:11:05.633 "read": true, 00:11:05.633 "write": true, 00:11:05.633 "unmap": true, 00:11:05.633 "write_zeroes": true, 00:11:05.633 "flush": true, 00:11:05.633 "reset": true, 00:11:05.633 "compare": false, 00:11:05.633 "compare_and_write": false, 00:11:05.633 "abort": true, 00:11:05.633 "nvme_admin": false, 00:11:05.633 "nvme_io": false 00:11:05.633 }, 00:11:05.633 "memory_domains": [ 00:11:05.633 { 00:11:05.633 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.633 "dma_device_type": 2 00:11:05.633 } 00:11:05.633 ], 00:11:05.633 "driver_specific": {} 00:11:05.633 } 00:11:05.633 ] 00:11:05.633 12:03:12 -- common/autotest_common.sh@895 -- # return 0 00:11:05.634 12:03:12 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:11:05.634 12:03:12 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:05.634 12:03:12 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:05.634 12:03:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:05.634 12:03:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:05.634 12:03:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:05.634 12:03:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:05.634 12:03:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:05.634 12:03:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:05.634 12:03:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:05.634 12:03:12 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:05.634 12:03:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:05.634 12:03:12 -- bdev/bdev_raid.sh@127 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:05.634 12:03:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.892 12:03:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:05.892 "name": "Existed_Raid", 00:11:05.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.892 "strip_size_kb": 64, 00:11:05.892 "state": "configuring", 00:11:05.892 "raid_level": "concat", 00:11:05.892 "superblock": false, 00:11:05.892 "num_base_bdevs": 3, 00:11:05.892 "num_base_bdevs_discovered": 2, 00:11:05.892 "num_base_bdevs_operational": 3, 00:11:05.892 "base_bdevs_list": [ 00:11:05.892 { 00:11:05.892 "name": "BaseBdev1", 00:11:05.892 "uuid": "3f47c2df-f499-4269-b3bd-8337b179e4d6", 00:11:05.892 "is_configured": true, 00:11:05.892 "data_offset": 0, 00:11:05.892 "data_size": 65536 00:11:05.892 }, 00:11:05.892 { 00:11:05.892 "name": "BaseBdev2", 00:11:05.892 "uuid": "da80b5d2-1c12-4c2c-ad95-19a541087e86", 00:11:05.892 "is_configured": true, 00:11:05.892 "data_offset": 0, 00:11:05.892 "data_size": 65536 00:11:05.892 }, 00:11:05.892 { 00:11:05.892 "name": "BaseBdev3", 00:11:05.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.892 "is_configured": false, 00:11:05.892 "data_offset": 0, 00:11:05.892 "data_size": 0 00:11:05.892 } 00:11:05.892 ] 00:11:05.892 }' 00:11:05.892 12:03:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:05.892 12:03:13 -- common/autotest_common.sh@10 -- # set +x 00:11:06.461 12:03:13 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:06.461 [2024-07-25 12:03:13.738143] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:06.461 [2024-07-25 12:03:13.738172] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0xf31630 00:11:06.461 [2024-07-25 12:03:13.738178] 
bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:11:06.461 [2024-07-25 12:03:13.738358] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf35f30 00:11:06.461 [2024-07-25 12:03:13.738441] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xf31630 00:11:06.461 [2024-07-25 12:03:13.738447] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xf31630 00:11:06.461 [2024-07-25 12:03:13.738565] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.461 BaseBdev3 00:11:06.461 12:03:13 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:11:06.461 12:03:13 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:11:06.461 12:03:13 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:06.461 12:03:13 -- common/autotest_common.sh@889 -- # local i 00:11:06.462 12:03:13 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:06.462 12:03:13 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:06.462 12:03:13 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:06.784 12:03:13 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:06.784 [ 00:11:06.784 { 00:11:06.784 "name": "BaseBdev3", 00:11:06.784 "aliases": [ 00:11:06.784 "639fdd44-1f07-4bfd-b10b-cc43103e0179" 00:11:06.784 ], 00:11:06.784 "product_name": "Malloc disk", 00:11:06.784 "block_size": 512, 00:11:06.784 "num_blocks": 65536, 00:11:06.784 "uuid": "639fdd44-1f07-4bfd-b10b-cc43103e0179", 00:11:06.784 "assigned_rate_limits": { 00:11:06.784 "rw_ios_per_sec": 0, 00:11:06.784 "rw_mbytes_per_sec": 0, 00:11:06.784 "r_mbytes_per_sec": 0, 00:11:06.784 "w_mbytes_per_sec": 0 00:11:06.784 }, 00:11:06.784 "claimed": true, 00:11:06.784 
"claim_type": "exclusive_write", 00:11:06.784 "zoned": false, 00:11:06.784 "supported_io_types": { 00:11:06.784 "read": true, 00:11:06.784 "write": true, 00:11:06.784 "unmap": true, 00:11:06.784 "write_zeroes": true, 00:11:06.784 "flush": true, 00:11:06.784 "reset": true, 00:11:06.784 "compare": false, 00:11:06.784 "compare_and_write": false, 00:11:06.784 "abort": true, 00:11:06.784 "nvme_admin": false, 00:11:06.784 "nvme_io": false 00:11:06.784 }, 00:11:06.784 "memory_domains": [ 00:11:06.784 { 00:11:06.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.784 "dma_device_type": 2 00:11:06.784 } 00:11:06.784 ], 00:11:06.784 "driver_specific": {} 00:11:06.784 } 00:11:06.784 ] 00:11:07.043 12:03:14 -- common/autotest_common.sh@895 -- # return 0 00:11:07.043 12:03:14 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:11:07.043 12:03:14 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:07.043 12:03:14 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:07.043 12:03:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:07.043 12:03:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:07.043 12:03:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:07.043 12:03:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:07.043 12:03:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:07.043 12:03:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:07.043 12:03:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:07.043 12:03:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:07.043 12:03:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:07.043 12:03:14 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:07.043 12:03:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.043 12:03:14 -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:07.043 "name": "Existed_Raid", 00:11:07.043 "uuid": "446f5dc4-66ee-4600-b403-5a6e2f7ab3d3", 00:11:07.043 "strip_size_kb": 64, 00:11:07.043 "state": "online", 00:11:07.043 "raid_level": "concat", 00:11:07.043 "superblock": false, 00:11:07.043 "num_base_bdevs": 3, 00:11:07.043 "num_base_bdevs_discovered": 3, 00:11:07.043 "num_base_bdevs_operational": 3, 00:11:07.043 "base_bdevs_list": [ 00:11:07.043 { 00:11:07.043 "name": "BaseBdev1", 00:11:07.043 "uuid": "3f47c2df-f499-4269-b3bd-8337b179e4d6", 00:11:07.043 "is_configured": true, 00:11:07.043 "data_offset": 0, 00:11:07.043 "data_size": 65536 00:11:07.043 }, 00:11:07.043 { 00:11:07.043 "name": "BaseBdev2", 00:11:07.043 "uuid": "da80b5d2-1c12-4c2c-ad95-19a541087e86", 00:11:07.043 "is_configured": true, 00:11:07.043 "data_offset": 0, 00:11:07.043 "data_size": 65536 00:11:07.043 }, 00:11:07.043 { 00:11:07.043 "name": "BaseBdev3", 00:11:07.043 "uuid": "639fdd44-1f07-4bfd-b10b-cc43103e0179", 00:11:07.043 "is_configured": true, 00:11:07.043 "data_offset": 0, 00:11:07.043 "data_size": 65536 00:11:07.043 } 00:11:07.043 ] 00:11:07.043 }' 00:11:07.043 12:03:14 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:07.043 12:03:14 -- common/autotest_common.sh@10 -- # set +x 00:11:07.611 12:03:14 -- bdev/bdev_raid.sh@262 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:07.611 [2024-07-25 12:03:14.877136] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:07.611 [2024-07-25 12:03:14.877160] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:07.611 [2024-07-25 12:03:14.877189] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:07.611 12:03:14 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:11:07.611 12:03:14 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:11:07.611 12:03:14 -- bdev/bdev_raid.sh@195 -- 
# case $1 in 00:11:07.611 12:03:14 -- bdev/bdev_raid.sh@197 -- # return 1 00:11:07.611 12:03:14 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:11:07.611 12:03:14 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:11:07.611 12:03:14 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:07.611 12:03:14 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:11:07.611 12:03:14 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:07.611 12:03:14 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:07.611 12:03:14 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:07.611 12:03:14 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:07.611 12:03:14 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:07.611 12:03:14 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:07.611 12:03:14 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:07.611 12:03:14 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:07.611 12:03:14 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.870 12:03:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:07.870 "name": "Existed_Raid", 00:11:07.870 "uuid": "446f5dc4-66ee-4600-b403-5a6e2f7ab3d3", 00:11:07.870 "strip_size_kb": 64, 00:11:07.870 "state": "offline", 00:11:07.870 "raid_level": "concat", 00:11:07.870 "superblock": false, 00:11:07.870 "num_base_bdevs": 3, 00:11:07.870 "num_base_bdevs_discovered": 2, 00:11:07.870 "num_base_bdevs_operational": 2, 00:11:07.870 "base_bdevs_list": [ 00:11:07.870 { 00:11:07.870 "name": null, 00:11:07.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.870 "is_configured": false, 00:11:07.870 "data_offset": 0, 00:11:07.870 "data_size": 65536 00:11:07.870 }, 00:11:07.870 { 00:11:07.870 "name": "BaseBdev2", 00:11:07.870 "uuid": 
"da80b5d2-1c12-4c2c-ad95-19a541087e86", 00:11:07.870 "is_configured": true, 00:11:07.870 "data_offset": 0, 00:11:07.870 "data_size": 65536 00:11:07.870 }, 00:11:07.870 { 00:11:07.870 "name": "BaseBdev3", 00:11:07.870 "uuid": "639fdd44-1f07-4bfd-b10b-cc43103e0179", 00:11:07.870 "is_configured": true, 00:11:07.870 "data_offset": 0, 00:11:07.870 "data_size": 65536 00:11:07.870 } 00:11:07.870 ] 00:11:07.870 }' 00:11:07.870 12:03:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:07.870 12:03:15 -- common/autotest_common.sh@10 -- # set +x 00:11:08.437 12:03:15 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:11:08.437 12:03:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:08.437 12:03:15 -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:08.437 12:03:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:11:08.437 12:03:15 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:11:08.437 12:03:15 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:08.437 12:03:15 -- bdev/bdev_raid.sh@279 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:08.696 [2024-07-25 12:03:15.880880] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:08.696 12:03:15 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:11:08.696 12:03:15 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:08.696 12:03:15 -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:08.697 12:03:15 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:11:08.956 12:03:16 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:11:08.956 12:03:16 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:08.956 12:03:16 -- bdev/bdev_raid.sh@279 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:08.956 [2024-07-25 12:03:16.237136] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:08.956 [2024-07-25 12:03:16.237170] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf31630 name Existed_Raid, state offline 00:11:08.956 12:03:16 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:11:08.956 12:03:16 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:08.956 12:03:16 -- bdev/bdev_raid.sh@281 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:09.214 12:03:16 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:11:09.214 12:03:16 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:11:09.214 12:03:16 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:11:09.214 12:03:16 -- bdev/bdev_raid.sh@287 -- # killprocess 1230577 00:11:09.214 12:03:16 -- common/autotest_common.sh@926 -- # '[' -z 1230577 ']' 00:11:09.214 12:03:16 -- common/autotest_common.sh@930 -- # kill -0 1230577 00:11:09.214 12:03:16 -- common/autotest_common.sh@931 -- # uname 00:11:09.214 12:03:16 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:09.214 12:03:16 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1230577 00:11:09.214 12:03:16 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:09.214 12:03:16 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:09.214 12:03:16 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1230577' 00:11:09.214 killing process with pid 1230577 00:11:09.214 12:03:16 -- common/autotest_common.sh@945 -- # kill 1230577 00:11:09.214 [2024-07-25 12:03:16.482686] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:09.214 12:03:16 -- common/autotest_common.sh@950 -- # wait 1230577 00:11:09.214 [2024-07-25 12:03:16.483602] 
bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:09.474 12:03:16 -- bdev/bdev_raid.sh@289 -- # return 0 00:11:09.474 00:11:09.474 real 0m8.521s 00:11:09.474 user 0m14.872s 00:11:09.474 sys 0m1.705s 00:11:09.474 12:03:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:09.474 12:03:16 -- common/autotest_common.sh@10 -- # set +x 00:11:09.474 ************************************ 00:11:09.474 END TEST raid_state_function_test 00:11:09.474 ************************************ 00:11:09.474 12:03:16 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:11:09.474 12:03:16 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:11:09.474 12:03:16 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:09.474 12:03:16 -- common/autotest_common.sh@10 -- # set +x 00:11:09.474 ************************************ 00:11:09.474 START TEST raid_state_function_test_sb 00:11:09.474 ************************************ 00:11:09.474 12:03:16 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 3 true 00:11:09.474 12:03:16 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:11:09.474 12:03:16 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:11:09.474 12:03:16 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:11:09.474 12:03:16 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:11:09.474 12:03:16 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:11:09.474 12:03:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:09.474 12:03:16 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:11:09.474 12:03:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:09.474 12:03:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:09.474 12:03:16 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:11:09.474 12:03:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:09.474 12:03:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:09.474 12:03:16 -- bdev/bdev_raid.sh@208 -- 
# echo BaseBdev3 00:11:09.474 12:03:16 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:09.474 12:03:16 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:09.474 12:03:16 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:09.474 12:03:16 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:11:09.474 12:03:16 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:11:09.474 12:03:16 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:11:09.474 12:03:16 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:11:09.474 12:03:16 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:11:09.474 12:03:16 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:11:09.474 12:03:16 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:11:09.474 12:03:16 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:11:09.474 12:03:16 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:11:09.474 12:03:16 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:11:09.474 12:03:16 -- bdev/bdev_raid.sh@226 -- # raid_pid=1232280 00:11:09.474 12:03:16 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 1232280' 00:11:09.474 Process raid pid: 1232280 00:11:09.474 12:03:16 -- bdev/bdev_raid.sh@225 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:09.474 12:03:16 -- bdev/bdev_raid.sh@228 -- # waitforlisten 1232280 /var/tmp/spdk-raid.sock 00:11:09.474 12:03:16 -- common/autotest_common.sh@819 -- # '[' -z 1232280 ']' 00:11:09.474 12:03:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:09.474 12:03:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:09.474 12:03:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:11:09.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:09.474 12:03:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:09.474 12:03:16 -- common/autotest_common.sh@10 -- # set +x 00:11:09.733 [2024-07-25 12:03:16.827400] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:09.733 [2024-07-25 12:03:16.827459] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.733 [2024-07-25 12:03:16.917384] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.733 [2024-07-25 12:03:17.001304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.992 [2024-07-25 12:03:17.061300] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.992 [2024-07-25 12:03:17.061328] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:10.560 12:03:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:10.560 12:03:17 -- common/autotest_common.sh@852 -- # return 0 00:11:10.560 12:03:17 -- bdev/bdev_raid.sh@232 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:10.560 [2024-07-25 12:03:17.776530] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:10.560 [2024-07-25 12:03:17.776563] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:10.560 [2024-07-25 12:03:17.776573] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:10.560 [2024-07-25 12:03:17.776580] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:10.560 [2024-07-25 12:03:17.776585] 
bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:10.560 [2024-07-25 12:03:17.776592] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:10.560 12:03:17 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:10.560 12:03:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:10.560 12:03:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:10.560 12:03:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:10.560 12:03:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:10.560 12:03:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:10.560 12:03:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:10.560 12:03:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:10.560 12:03:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:10.560 12:03:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:10.560 12:03:17 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:10.560 12:03:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:10.819 12:03:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:10.819 "name": "Existed_Raid", 00:11:10.819 "uuid": "15e31acd-ebc1-452b-aa5f-84efbe2aa434", 00:11:10.819 "strip_size_kb": 64, 00:11:10.819 "state": "configuring", 00:11:10.819 "raid_level": "concat", 00:11:10.819 "superblock": true, 00:11:10.819 "num_base_bdevs": 3, 00:11:10.819 "num_base_bdevs_discovered": 0, 00:11:10.819 "num_base_bdevs_operational": 3, 00:11:10.819 "base_bdevs_list": [ 00:11:10.819 { 00:11:10.819 "name": "BaseBdev1", 00:11:10.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.819 "is_configured": false, 00:11:10.819 "data_offset": 0, 00:11:10.819 "data_size": 0 00:11:10.819 }, 00:11:10.819 { 
00:11:10.819 "name": "BaseBdev2", 00:11:10.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.819 "is_configured": false, 00:11:10.819 "data_offset": 0, 00:11:10.819 "data_size": 0 00:11:10.819 }, 00:11:10.819 { 00:11:10.819 "name": "BaseBdev3", 00:11:10.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:10.819 "is_configured": false, 00:11:10.819 "data_offset": 0, 00:11:10.819 "data_size": 0 00:11:10.819 } 00:11:10.819 ] 00:11:10.819 }' 00:11:10.820 12:03:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:10.820 12:03:17 -- common/autotest_common.sh@10 -- # set +x 00:11:11.387 12:03:18 -- bdev/bdev_raid.sh@234 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:11.387 [2024-07-25 12:03:18.606555] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:11.387 [2024-07-25 12:03:18.606576] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x13d0d60 name Existed_Raid, state configuring 00:11:11.387 12:03:18 -- bdev/bdev_raid.sh@238 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:11.645 [2024-07-25 12:03:18.783030] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:11.645 [2024-07-25 12:03:18.783050] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:11.645 [2024-07-25 12:03:18.783056] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:11.645 [2024-07-25 12:03:18.783063] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:11.645 [2024-07-25 12:03:18.783084] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:11.645 [2024-07-25 12:03:18.783091] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:11.645 12:03:18 -- bdev/bdev_raid.sh@239 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:11.903 [2024-07-25 12:03:18.960147] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:11.903 BaseBdev1 00:11:11.903 12:03:18 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:11:11.903 12:03:18 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:11:11.903 12:03:18 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:11.903 12:03:18 -- common/autotest_common.sh@889 -- # local i 00:11:11.903 12:03:18 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:11.903 12:03:18 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:11.903 12:03:18 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:11.903 12:03:19 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:12.161 [ 00:11:12.161 { 00:11:12.161 "name": "BaseBdev1", 00:11:12.161 "aliases": [ 00:11:12.161 "cf3409ab-7ff3-4f2b-8c38-6c12be141f08" 00:11:12.161 ], 00:11:12.161 "product_name": "Malloc disk", 00:11:12.161 "block_size": 512, 00:11:12.161 "num_blocks": 65536, 00:11:12.161 "uuid": "cf3409ab-7ff3-4f2b-8c38-6c12be141f08", 00:11:12.161 "assigned_rate_limits": { 00:11:12.161 "rw_ios_per_sec": 0, 00:11:12.161 "rw_mbytes_per_sec": 0, 00:11:12.161 "r_mbytes_per_sec": 0, 00:11:12.161 "w_mbytes_per_sec": 0 00:11:12.161 }, 00:11:12.161 "claimed": true, 00:11:12.161 "claim_type": "exclusive_write", 00:11:12.161 "zoned": false, 00:11:12.161 "supported_io_types": { 00:11:12.161 "read": true, 00:11:12.161 "write": true, 00:11:12.161 "unmap": true, 00:11:12.161 "write_zeroes": 
true, 00:11:12.161 "flush": true, 00:11:12.161 "reset": true, 00:11:12.161 "compare": false, 00:11:12.161 "compare_and_write": false, 00:11:12.161 "abort": true, 00:11:12.161 "nvme_admin": false, 00:11:12.161 "nvme_io": false 00:11:12.161 }, 00:11:12.161 "memory_domains": [ 00:11:12.161 { 00:11:12.161 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.161 "dma_device_type": 2 00:11:12.161 } 00:11:12.161 ], 00:11:12.161 "driver_specific": {} 00:11:12.161 } 00:11:12.161 ] 00:11:12.161 12:03:19 -- common/autotest_common.sh@895 -- # return 0 00:11:12.161 12:03:19 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:12.161 12:03:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:12.161 12:03:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:12.161 12:03:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:12.161 12:03:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:12.161 12:03:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:12.161 12:03:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:12.161 12:03:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:12.161 12:03:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:12.161 12:03:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:12.161 12:03:19 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:12.161 12:03:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:12.420 12:03:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:12.420 "name": "Existed_Raid", 00:11:12.420 "uuid": "bc4e79d6-4370-4845-bb65-93a12a1d8584", 00:11:12.420 "strip_size_kb": 64, 00:11:12.420 "state": "configuring", 00:11:12.420 "raid_level": "concat", 00:11:12.420 "superblock": true, 00:11:12.420 "num_base_bdevs": 3, 00:11:12.420 
"num_base_bdevs_discovered": 1, 00:11:12.420 "num_base_bdevs_operational": 3, 00:11:12.420 "base_bdevs_list": [ 00:11:12.420 { 00:11:12.420 "name": "BaseBdev1", 00:11:12.420 "uuid": "cf3409ab-7ff3-4f2b-8c38-6c12be141f08", 00:11:12.420 "is_configured": true, 00:11:12.420 "data_offset": 2048, 00:11:12.420 "data_size": 63488 00:11:12.420 }, 00:11:12.420 { 00:11:12.420 "name": "BaseBdev2", 00:11:12.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.420 "is_configured": false, 00:11:12.420 "data_offset": 0, 00:11:12.420 "data_size": 0 00:11:12.420 }, 00:11:12.420 { 00:11:12.420 "name": "BaseBdev3", 00:11:12.420 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:12.420 "is_configured": false, 00:11:12.420 "data_offset": 0, 00:11:12.420 "data_size": 0 00:11:12.420 } 00:11:12.421 ] 00:11:12.421 }' 00:11:12.421 12:03:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:12.421 12:03:19 -- common/autotest_common.sh@10 -- # set +x 00:11:12.988 12:03:20 -- bdev/bdev_raid.sh@242 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:12.988 [2024-07-25 12:03:20.155238] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:12.988 [2024-07-25 12:03:20.155285] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x13d0630 name Existed_Raid, state configuring 00:11:12.988 12:03:20 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:11:12.988 12:03:20 -- bdev/bdev_raid.sh@246 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:13.246 12:03:20 -- bdev/bdev_raid.sh@247 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:13.246 BaseBdev1 00:11:13.246 12:03:20 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:11:13.246 12:03:20 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 
00:11:13.246 12:03:20 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:13.246 12:03:20 -- common/autotest_common.sh@889 -- # local i 00:11:13.246 12:03:20 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:13.246 12:03:20 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:13.246 12:03:20 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:13.505 12:03:20 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:13.763 [ 00:11:13.763 { 00:11:13.763 "name": "BaseBdev1", 00:11:13.763 "aliases": [ 00:11:13.763 "1b67c9bc-0d4f-4234-9af4-8ebba9e8eaf2" 00:11:13.763 ], 00:11:13.763 "product_name": "Malloc disk", 00:11:13.763 "block_size": 512, 00:11:13.763 "num_blocks": 65536, 00:11:13.763 "uuid": "1b67c9bc-0d4f-4234-9af4-8ebba9e8eaf2", 00:11:13.763 "assigned_rate_limits": { 00:11:13.763 "rw_ios_per_sec": 0, 00:11:13.763 "rw_mbytes_per_sec": 0, 00:11:13.763 "r_mbytes_per_sec": 0, 00:11:13.763 "w_mbytes_per_sec": 0 00:11:13.763 }, 00:11:13.763 "claimed": false, 00:11:13.763 "zoned": false, 00:11:13.763 "supported_io_types": { 00:11:13.763 "read": true, 00:11:13.763 "write": true, 00:11:13.763 "unmap": true, 00:11:13.763 "write_zeroes": true, 00:11:13.763 "flush": true, 00:11:13.763 "reset": true, 00:11:13.763 "compare": false, 00:11:13.763 "compare_and_write": false, 00:11:13.763 "abort": true, 00:11:13.763 "nvme_admin": false, 00:11:13.763 "nvme_io": false 00:11:13.763 }, 00:11:13.763 "memory_domains": [ 00:11:13.763 { 00:11:13.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.763 "dma_device_type": 2 00:11:13.763 } 00:11:13.763 ], 00:11:13.763 "driver_specific": {} 00:11:13.763 } 00:11:13.763 ] 00:11:13.763 12:03:20 -- common/autotest_common.sh@895 -- # return 0 00:11:13.763 12:03:20 -- bdev/bdev_raid.sh@253 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:13.763 [2024-07-25 12:03:21.026246] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:13.763 [2024-07-25 12:03:21.027181] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:13.763 [2024-07-25 12:03:21.027204] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:13.763 [2024-07-25 12:03:21.027210] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:13.763 [2024-07-25 12:03:21.027217] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:13.763 12:03:21 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:11:13.763 12:03:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:13.763 12:03:21 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:13.763 12:03:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:13.763 12:03:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:13.763 12:03:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:13.763 12:03:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:13.763 12:03:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:13.763 12:03:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:13.763 12:03:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:13.763 12:03:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:13.763 12:03:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:13.763 12:03:21 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:13.763 12:03:21 -- bdev/bdev_raid.sh@127 -- # jq 
-r '.[] | select(.name == "Existed_Raid")' 00:11:14.021 12:03:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:14.021 "name": "Existed_Raid", 00:11:14.021 "uuid": "ea9b386a-0aa5-4ea8-ad12-60c0e1417a4b", 00:11:14.021 "strip_size_kb": 64, 00:11:14.021 "state": "configuring", 00:11:14.021 "raid_level": "concat", 00:11:14.021 "superblock": true, 00:11:14.021 "num_base_bdevs": 3, 00:11:14.021 "num_base_bdevs_discovered": 1, 00:11:14.021 "num_base_bdevs_operational": 3, 00:11:14.021 "base_bdevs_list": [ 00:11:14.021 { 00:11:14.021 "name": "BaseBdev1", 00:11:14.021 "uuid": "1b67c9bc-0d4f-4234-9af4-8ebba9e8eaf2", 00:11:14.021 "is_configured": true, 00:11:14.021 "data_offset": 2048, 00:11:14.021 "data_size": 63488 00:11:14.021 }, 00:11:14.021 { 00:11:14.021 "name": "BaseBdev2", 00:11:14.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.021 "is_configured": false, 00:11:14.021 "data_offset": 0, 00:11:14.021 "data_size": 0 00:11:14.021 }, 00:11:14.021 { 00:11:14.021 "name": "BaseBdev3", 00:11:14.021 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.021 "is_configured": false, 00:11:14.021 "data_offset": 0, 00:11:14.021 "data_size": 0 00:11:14.021 } 00:11:14.021 ] 00:11:14.021 }' 00:11:14.021 12:03:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:14.021 12:03:21 -- common/autotest_common.sh@10 -- # set +x 00:11:14.588 12:03:21 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:14.588 [2024-07-25 12:03:21.843294] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:14.588 BaseBdev2 00:11:14.588 12:03:21 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:11:14.588 12:03:21 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:11:14.588 12:03:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:14.588 12:03:21 -- common/autotest_common.sh@889 -- # local i 00:11:14.588 
12:03:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:14.588 12:03:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:14.588 12:03:21 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:14.845 12:03:22 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:15.104 [ 00:11:15.104 { 00:11:15.104 "name": "BaseBdev2", 00:11:15.104 "aliases": [ 00:11:15.104 "4b0ae33a-33f7-4c7f-a34f-5e4f523ba55c" 00:11:15.104 ], 00:11:15.104 "product_name": "Malloc disk", 00:11:15.104 "block_size": 512, 00:11:15.104 "num_blocks": 65536, 00:11:15.104 "uuid": "4b0ae33a-33f7-4c7f-a34f-5e4f523ba55c", 00:11:15.104 "assigned_rate_limits": { 00:11:15.104 "rw_ios_per_sec": 0, 00:11:15.104 "rw_mbytes_per_sec": 0, 00:11:15.104 "r_mbytes_per_sec": 0, 00:11:15.104 "w_mbytes_per_sec": 0 00:11:15.104 }, 00:11:15.104 "claimed": true, 00:11:15.104 "claim_type": "exclusive_write", 00:11:15.104 "zoned": false, 00:11:15.104 "supported_io_types": { 00:11:15.104 "read": true, 00:11:15.104 "write": true, 00:11:15.104 "unmap": true, 00:11:15.104 "write_zeroes": true, 00:11:15.104 "flush": true, 00:11:15.104 "reset": true, 00:11:15.104 "compare": false, 00:11:15.104 "compare_and_write": false, 00:11:15.104 "abort": true, 00:11:15.104 "nvme_admin": false, 00:11:15.104 "nvme_io": false 00:11:15.104 }, 00:11:15.104 "memory_domains": [ 00:11:15.104 { 00:11:15.104 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.104 "dma_device_type": 2 00:11:15.104 } 00:11:15.104 ], 00:11:15.104 "driver_specific": {} 00:11:15.104 } 00:11:15.104 ] 00:11:15.104 12:03:22 -- common/autotest_common.sh@895 -- # return 0 00:11:15.104 12:03:22 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:11:15.104 12:03:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:15.104 12:03:22 -- 
bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:15.104 12:03:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:15.104 12:03:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:15.104 12:03:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:15.104 12:03:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:15.104 12:03:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:15.104 12:03:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:15.104 12:03:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:15.104 12:03:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:15.104 12:03:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:15.104 12:03:22 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:15.104 12:03:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.104 12:03:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:15.104 "name": "Existed_Raid", 00:11:15.104 "uuid": "ea9b386a-0aa5-4ea8-ad12-60c0e1417a4b", 00:11:15.104 "strip_size_kb": 64, 00:11:15.104 "state": "configuring", 00:11:15.104 "raid_level": "concat", 00:11:15.104 "superblock": true, 00:11:15.104 "num_base_bdevs": 3, 00:11:15.104 "num_base_bdevs_discovered": 2, 00:11:15.104 "num_base_bdevs_operational": 3, 00:11:15.104 "base_bdevs_list": [ 00:11:15.104 { 00:11:15.104 "name": "BaseBdev1", 00:11:15.104 "uuid": "1b67c9bc-0d4f-4234-9af4-8ebba9e8eaf2", 00:11:15.104 "is_configured": true, 00:11:15.104 "data_offset": 2048, 00:11:15.104 "data_size": 63488 00:11:15.104 }, 00:11:15.104 { 00:11:15.104 "name": "BaseBdev2", 00:11:15.104 "uuid": "4b0ae33a-33f7-4c7f-a34f-5e4f523ba55c", 00:11:15.104 "is_configured": true, 00:11:15.104 "data_offset": 2048, 00:11:15.104 "data_size": 63488 00:11:15.104 }, 00:11:15.104 { 
00:11:15.104 "name": "BaseBdev3", 00:11:15.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.104 "is_configured": false, 00:11:15.104 "data_offset": 0, 00:11:15.104 "data_size": 0 00:11:15.104 } 00:11:15.104 ] 00:11:15.104 }' 00:11:15.104 12:03:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:15.104 12:03:22 -- common/autotest_common.sh@10 -- # set +x 00:11:15.671 12:03:22 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:15.929 [2024-07-25 12:03:22.997249] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:15.929 [2024-07-25 12:03:22.997401] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x1571180 00:11:15.929 [2024-07-25 12:03:22.997413] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:15.929 [2024-07-25 12:03:22.997550] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15718c0 00:11:15.929 [2024-07-25 12:03:22.997639] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1571180 00:11:15.929 [2024-07-25 12:03:22.997647] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1571180 00:11:15.929 [2024-07-25 12:03:22.997720] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.929 BaseBdev3 00:11:15.929 12:03:23 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:11:15.929 12:03:23 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:11:15.929 12:03:23 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:15.929 12:03:23 -- common/autotest_common.sh@889 -- # local i 00:11:15.929 12:03:23 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:15.929 12:03:23 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:15.929 12:03:23 -- common/autotest_common.sh@892 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:15.929 12:03:23 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:16.186 [ 00:11:16.186 { 00:11:16.186 "name": "BaseBdev3", 00:11:16.186 "aliases": [ 00:11:16.186 "83fc618a-7210-4b92-8600-d55f127aa02b" 00:11:16.186 ], 00:11:16.186 "product_name": "Malloc disk", 00:11:16.186 "block_size": 512, 00:11:16.186 "num_blocks": 65536, 00:11:16.186 "uuid": "83fc618a-7210-4b92-8600-d55f127aa02b", 00:11:16.186 "assigned_rate_limits": { 00:11:16.186 "rw_ios_per_sec": 0, 00:11:16.186 "rw_mbytes_per_sec": 0, 00:11:16.186 "r_mbytes_per_sec": 0, 00:11:16.186 "w_mbytes_per_sec": 0 00:11:16.186 }, 00:11:16.186 "claimed": true, 00:11:16.186 "claim_type": "exclusive_write", 00:11:16.186 "zoned": false, 00:11:16.186 "supported_io_types": { 00:11:16.186 "read": true, 00:11:16.186 "write": true, 00:11:16.186 "unmap": true, 00:11:16.186 "write_zeroes": true, 00:11:16.186 "flush": true, 00:11:16.186 "reset": true, 00:11:16.186 "compare": false, 00:11:16.186 "compare_and_write": false, 00:11:16.186 "abort": true, 00:11:16.186 "nvme_admin": false, 00:11:16.186 "nvme_io": false 00:11:16.186 }, 00:11:16.186 "memory_domains": [ 00:11:16.186 { 00:11:16.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.186 "dma_device_type": 2 00:11:16.186 } 00:11:16.186 ], 00:11:16.186 "driver_specific": {} 00:11:16.186 } 00:11:16.186 ] 00:11:16.186 12:03:23 -- common/autotest_common.sh@895 -- # return 0 00:11:16.186 12:03:23 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:11:16.186 12:03:23 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:16.186 12:03:23 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:16.186 12:03:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:16.186 12:03:23 -- 
bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:16.186 12:03:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:16.186 12:03:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:16.186 12:03:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:16.186 12:03:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:16.186 12:03:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:16.186 12:03:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:16.186 12:03:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:16.186 12:03:23 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:16.186 12:03:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.444 12:03:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:16.444 "name": "Existed_Raid", 00:11:16.444 "uuid": "ea9b386a-0aa5-4ea8-ad12-60c0e1417a4b", 00:11:16.444 "strip_size_kb": 64, 00:11:16.444 "state": "online", 00:11:16.444 "raid_level": "concat", 00:11:16.444 "superblock": true, 00:11:16.444 "num_base_bdevs": 3, 00:11:16.444 "num_base_bdevs_discovered": 3, 00:11:16.444 "num_base_bdevs_operational": 3, 00:11:16.444 "base_bdevs_list": [ 00:11:16.444 { 00:11:16.444 "name": "BaseBdev1", 00:11:16.444 "uuid": "1b67c9bc-0d4f-4234-9af4-8ebba9e8eaf2", 00:11:16.444 "is_configured": true, 00:11:16.444 "data_offset": 2048, 00:11:16.444 "data_size": 63488 00:11:16.444 }, 00:11:16.444 { 00:11:16.444 "name": "BaseBdev2", 00:11:16.444 "uuid": "4b0ae33a-33f7-4c7f-a34f-5e4f523ba55c", 00:11:16.444 "is_configured": true, 00:11:16.444 "data_offset": 2048, 00:11:16.444 "data_size": 63488 00:11:16.444 }, 00:11:16.444 { 00:11:16.444 "name": "BaseBdev3", 00:11:16.444 "uuid": "83fc618a-7210-4b92-8600-d55f127aa02b", 00:11:16.444 "is_configured": true, 00:11:16.444 "data_offset": 2048, 00:11:16.444 "data_size": 63488 00:11:16.444 } 
00:11:16.444 ] 00:11:16.444 }' 00:11:16.444 12:03:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:16.444 12:03:23 -- common/autotest_common.sh@10 -- # set +x 00:11:16.701 12:03:23 -- bdev/bdev_raid.sh@262 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:16.959 [2024-07-25 12:03:24.132324] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:16.959 [2024-07-25 12:03:24.132348] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:16.959 [2024-07-25 12:03:24.132379] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:16.959 12:03:24 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:11:16.959 12:03:24 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:11:16.959 12:03:24 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:11:16.959 12:03:24 -- bdev/bdev_raid.sh@197 -- # return 1 00:11:16.959 12:03:24 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:11:16.959 12:03:24 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:11:16.959 12:03:24 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:16.959 12:03:24 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:11:16.959 12:03:24 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:16.959 12:03:24 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:16.959 12:03:24 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:16.959 12:03:24 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:16.959 12:03:24 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:16.959 12:03:24 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:16.959 12:03:24 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:16.959 12:03:24 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:11:16.959 12:03:24 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.216 12:03:24 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:17.216 "name": "Existed_Raid", 00:11:17.216 "uuid": "ea9b386a-0aa5-4ea8-ad12-60c0e1417a4b", 00:11:17.216 "strip_size_kb": 64, 00:11:17.216 "state": "offline", 00:11:17.216 "raid_level": "concat", 00:11:17.216 "superblock": true, 00:11:17.216 "num_base_bdevs": 3, 00:11:17.216 "num_base_bdevs_discovered": 2, 00:11:17.216 "num_base_bdevs_operational": 2, 00:11:17.216 "base_bdevs_list": [ 00:11:17.216 { 00:11:17.216 "name": null, 00:11:17.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.216 "is_configured": false, 00:11:17.216 "data_offset": 2048, 00:11:17.216 "data_size": 63488 00:11:17.216 }, 00:11:17.216 { 00:11:17.216 "name": "BaseBdev2", 00:11:17.216 "uuid": "4b0ae33a-33f7-4c7f-a34f-5e4f523ba55c", 00:11:17.216 "is_configured": true, 00:11:17.216 "data_offset": 2048, 00:11:17.216 "data_size": 63488 00:11:17.216 }, 00:11:17.216 { 00:11:17.216 "name": "BaseBdev3", 00:11:17.216 "uuid": "83fc618a-7210-4b92-8600-d55f127aa02b", 00:11:17.216 "is_configured": true, 00:11:17.216 "data_offset": 2048, 00:11:17.216 "data_size": 63488 00:11:17.216 } 00:11:17.216 ] 00:11:17.216 }' 00:11:17.216 12:03:24 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:17.216 12:03:24 -- common/autotest_common.sh@10 -- # set +x 00:11:17.473 12:03:24 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:11:17.473 12:03:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:17.473 12:03:24 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:11:17.473 12:03:24 -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:17.730 12:03:24 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:11:17.730 12:03:24 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:17.730 12:03:24 -- bdev/bdev_raid.sh@279 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:17.988 [2024-07-25 12:03:25.099615] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:17.988 12:03:25 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:11:17.988 12:03:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:17.988 12:03:25 -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:17.988 12:03:25 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:11:18.246 12:03:25 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:11:18.246 12:03:25 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:18.246 12:03:25 -- bdev/bdev_raid.sh@279 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:18.246 [2024-07-25 12:03:25.444170] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:18.246 [2024-07-25 12:03:25.444210] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1571180 name Existed_Raid, state offline 00:11:18.246 12:03:25 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:11:18.246 12:03:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:18.246 12:03:25 -- bdev/bdev_raid.sh@281 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:18.246 12:03:25 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:11:18.504 12:03:25 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:11:18.504 12:03:25 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:11:18.504 12:03:25 -- bdev/bdev_raid.sh@287 -- # killprocess 1232280 00:11:18.504 12:03:25 -- common/autotest_common.sh@926 -- # '[' -z 1232280 ']' 00:11:18.504 12:03:25 -- common/autotest_common.sh@930 -- # kill -0 1232280 00:11:18.504 12:03:25 -- 
common/autotest_common.sh@931 -- # uname 00:11:18.504 12:03:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:18.504 12:03:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1232280 00:11:18.504 12:03:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:18.504 12:03:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:18.504 12:03:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1232280' 00:11:18.504 killing process with pid 1232280 00:11:18.504 12:03:25 -- common/autotest_common.sh@945 -- # kill 1232280 00:11:18.504 [2024-07-25 12:03:25.668399] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:18.504 12:03:25 -- common/autotest_common.sh@950 -- # wait 1232280 00:11:18.504 [2024-07-25 12:03:25.669315] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:18.763 12:03:25 -- bdev/bdev_raid.sh@289 -- # return 0 00:11:18.763 00:11:18.763 real 0m9.119s 00:11:18.763 user 0m15.992s 00:11:18.763 sys 0m1.769s 00:11:18.763 12:03:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:18.763 12:03:25 -- common/autotest_common.sh@10 -- # set +x 00:11:18.763 ************************************ 00:11:18.763 END TEST raid_state_function_test_sb 00:11:18.763 ************************************ 00:11:18.763 12:03:25 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:11:18.763 12:03:25 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:11:18.763 12:03:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:18.763 12:03:25 -- common/autotest_common.sh@10 -- # set +x 00:11:18.763 ************************************ 00:11:18.763 START TEST raid_superblock_test 00:11:18.763 ************************************ 00:11:18.763 12:03:25 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 3 00:11:18.763 12:03:25 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:11:18.763 12:03:25 
-- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=3 00:11:18.763 12:03:25 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:11:18.763 12:03:25 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:11:18.763 12:03:25 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:11:18.763 12:03:25 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:11:18.763 12:03:25 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:11:18.763 12:03:25 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:11:18.763 12:03:25 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:11:18.763 12:03:25 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:11:18.763 12:03:25 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:11:18.763 12:03:25 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:11:18.763 12:03:25 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:11:18.763 12:03:25 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:11:18.763 12:03:25 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:11:18.763 12:03:25 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:11:18.763 12:03:25 -- bdev/bdev_raid.sh@356 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:11:18.763 12:03:25 -- bdev/bdev_raid.sh@357 -- # raid_pid=1233765 00:11:18.763 12:03:25 -- bdev/bdev_raid.sh@358 -- # waitforlisten 1233765 /var/tmp/spdk-raid.sock 00:11:18.763 12:03:25 -- common/autotest_common.sh@819 -- # '[' -z 1233765 ']' 00:11:18.763 12:03:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:18.763 12:03:25 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:18.763 12:03:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:18.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:11:18.763 12:03:25 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:18.763 12:03:25 -- common/autotest_common.sh@10 -- # set +x 00:11:18.763 [2024-07-25 12:03:25.969058] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:18.763 [2024-07-25 12:03:25.969106] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1233765 ] 00:11:18.763 [2024-07-25 12:03:26.055389] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.021 [2024-07-25 12:03:26.142984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.021 [2024-07-25 12:03:26.198238] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.021 [2024-07-25 12:03:26.198269] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:19.587 12:03:26 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:19.587 12:03:26 -- common/autotest_common.sh@852 -- # return 0 00:11:19.587 12:03:26 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:11:19.587 12:03:26 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:11:19.587 12:03:26 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:11:19.587 12:03:26 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:11:19.587 12:03:26 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:19.587 12:03:26 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:19.587 12:03:26 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:11:19.587 12:03:26 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:19.587 12:03:26 -- bdev/bdev_raid.sh@370 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:11:19.846 malloc1 00:11:19.846 12:03:26 -- 
bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:19.846 [2024-07-25 12:03:27.100801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:19.846 [2024-07-25 12:03:27.100840] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:19.846 [2024-07-25 12:03:27.100856] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x11d08d0 00:11:19.846 [2024-07-25 12:03:27.100864] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:19.846 [2024-07-25 12:03:27.102403] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:19.846 [2024-07-25 12:03:27.102428] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:19.846 pt1 00:11:19.846 12:03:27 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:11:19.846 12:03:27 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:11:19.846 12:03:27 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:11:19.846 12:03:27 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:11:19.846 12:03:27 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:19.846 12:03:27 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:19.846 12:03:27 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:11:19.846 12:03:27 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:19.846 12:03:27 -- bdev/bdev_raid.sh@370 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:11:20.104 malloc2 00:11:20.104 12:03:27 -- bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:11:20.363 [2024-07-25 12:03:27.457560] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:20.363 [2024-07-25 12:03:27.457596] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.363 [2024-07-25 12:03:27.457609] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13781a0 00:11:20.363 [2024-07-25 12:03:27.457617] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.363 [2024-07-25 12:03:27.458729] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.363 [2024-07-25 12:03:27.458751] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:20.363 pt2 00:11:20.363 12:03:27 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:11:20.363 12:03:27 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:11:20.363 12:03:27 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:11:20.363 12:03:27 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:11:20.363 12:03:27 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:20.363 12:03:27 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:20.363 12:03:27 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:11:20.363 12:03:27 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:20.363 12:03:27 -- bdev/bdev_raid.sh@370 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:11:20.363 malloc3 00:11:20.363 12:03:27 -- bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:20.622 [2024-07-25 12:03:27.795353] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:20.622 [2024-07-25 12:03:27.795391] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:20.622 [2024-07-25 12:03:27.795406] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1378700 00:11:20.622 [2024-07-25 12:03:27.795414] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:20.622 [2024-07-25 12:03:27.796590] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:20.622 [2024-07-25 12:03:27.796612] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:20.622 pt3 00:11:20.622 12:03:27 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:11:20.622 12:03:27 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:11:20.622 12:03:27 -- bdev/bdev_raid.sh@375 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:11:20.958 [2024-07-25 12:03:27.951794] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:20.958 [2024-07-25 12:03:27.952792] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:20.958 [2024-07-25 12:03:27.952829] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:20.958 [2024-07-25 12:03:27.952945] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x137bcf0 00:11:20.958 [2024-07-25 12:03:27.952953] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:20.958 [2024-07-25 12:03:27.953100] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1377210 00:11:20.958 [2024-07-25 12:03:27.953194] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x137bcf0 00:11:20.958 [2024-07-25 12:03:27.953201] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x137bcf0 00:11:20.958 [2024-07-25 12:03:27.953268] bdev_raid.c: 
316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:20.958 12:03:27 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:20.958 12:03:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:20.958 12:03:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:20.958 12:03:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:20.958 12:03:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:20.958 12:03:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:20.958 12:03:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:20.958 12:03:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:20.958 12:03:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:20.958 12:03:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:20.958 12:03:27 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:20.958 12:03:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:20.958 12:03:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:20.958 "name": "raid_bdev1", 00:11:20.958 "uuid": "13b7288f-ba9c-4d59-95eb-62ab083209c2", 00:11:20.958 "strip_size_kb": 64, 00:11:20.958 "state": "online", 00:11:20.958 "raid_level": "concat", 00:11:20.958 "superblock": true, 00:11:20.958 "num_base_bdevs": 3, 00:11:20.958 "num_base_bdevs_discovered": 3, 00:11:20.958 "num_base_bdevs_operational": 3, 00:11:20.958 "base_bdevs_list": [ 00:11:20.959 { 00:11:20.959 "name": "pt1", 00:11:20.959 "uuid": "23b215e3-82b0-5d91-80bc-a49270623cbc", 00:11:20.959 "is_configured": true, 00:11:20.959 "data_offset": 2048, 00:11:20.959 "data_size": 63488 00:11:20.959 }, 00:11:20.959 { 00:11:20.959 "name": "pt2", 00:11:20.959 "uuid": "d494088a-a4ff-59ea-8549-aa42fddfc0ab", 00:11:20.959 "is_configured": true, 00:11:20.959 "data_offset": 2048, 00:11:20.959 "data_size": 
63488 00:11:20.959 }, 00:11:20.959 { 00:11:20.959 "name": "pt3", 00:11:20.959 "uuid": "0e35f6c2-65cb-5f12-bb77-b65ee880ab2b", 00:11:20.959 "is_configured": true, 00:11:20.959 "data_offset": 2048, 00:11:20.959 "data_size": 63488 00:11:20.959 } 00:11:20.959 ] 00:11:20.959 }' 00:11:20.959 12:03:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:20.959 12:03:28 -- common/autotest_common.sh@10 -- # set +x 00:11:21.544 12:03:28 -- bdev/bdev_raid.sh@379 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:21.544 12:03:28 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:11:21.544 [2024-07-25 12:03:28.725889] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:21.544 12:03:28 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=13b7288f-ba9c-4d59-95eb-62ab083209c2 00:11:21.544 12:03:28 -- bdev/bdev_raid.sh@380 -- # '[' -z 13b7288f-ba9c-4d59-95eb-62ab083209c2 ']' 00:11:21.544 12:03:28 -- bdev/bdev_raid.sh@385 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:21.801 [2024-07-25 12:03:28.886160] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:21.801 [2024-07-25 12:03:28.886174] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:21.801 [2024-07-25 12:03:28.886207] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:21.801 [2024-07-25 12:03:28.886242] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:21.801 [2024-07-25 12:03:28.886249] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x137bcf0 name raid_bdev1, state offline 00:11:21.801 12:03:28 -- bdev/bdev_raid.sh@386 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:21.801 
12:03:28 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:11:21.801 12:03:29 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:11:21.801 12:03:29 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:11:21.801 12:03:29 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:11:21.801 12:03:29 -- bdev/bdev_raid.sh@393 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:11:22.059 12:03:29 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:11:22.059 12:03:29 -- bdev/bdev_raid.sh@393 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:22.317 12:03:29 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:11:22.317 12:03:29 -- bdev/bdev_raid.sh@393 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:11:22.317 12:03:29 -- bdev/bdev_raid.sh@395 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:11:22.317 12:03:29 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:22.575 12:03:29 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:11:22.576 12:03:29 -- bdev/bdev_raid.sh@401 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:11:22.576 12:03:29 -- common/autotest_common.sh@640 -- # local es=0 00:11:22.576 12:03:29 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:11:22.576 12:03:29 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:22.576 12:03:29 -- common/autotest_common.sh@632 -- # 
case "$(type -t "$arg")" in 00:11:22.576 12:03:29 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:22.576 12:03:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:22.576 12:03:29 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:22.576 12:03:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:22.576 12:03:29 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:22.576 12:03:29 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:11:22.576 12:03:29 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:11:22.576 [2024-07-25 12:03:29.884722] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:22.834 [2024-07-25 12:03:29.885764] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:22.834 [2024-07-25 12:03:29.885794] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:22.834 [2024-07-25 12:03:29.885829] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:11:22.834 [2024-07-25 12:03:29.885857] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:11:22.834 [2024-07-25 12:03:29.885872] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:11:22.834 [2024-07-25 12:03:29.885884] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:22.834 [2024-07-25 12:03:29.885891] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x11d11c0 name raid_bdev1, state configuring 00:11:22.834 request: 00:11:22.834 { 00:11:22.834 "name": "raid_bdev1", 00:11:22.834 "raid_level": "concat", 00:11:22.834 "base_bdevs": [ 00:11:22.834 "malloc1", 00:11:22.834 "malloc2", 00:11:22.834 "malloc3" 00:11:22.834 ], 00:11:22.834 "superblock": false, 00:11:22.834 "strip_size_kb": 64, 00:11:22.834 "method": "bdev_raid_create", 00:11:22.834 "req_id": 1 00:11:22.834 } 00:11:22.834 Got JSON-RPC error response 00:11:22.834 response: 00:11:22.834 { 00:11:22.834 "code": -17, 00:11:22.834 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:22.834 } 00:11:22.834 12:03:29 -- common/autotest_common.sh@643 -- # es=1 00:11:22.834 12:03:29 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:22.834 12:03:29 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:22.834 12:03:29 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:22.834 12:03:29 -- bdev/bdev_raid.sh@403 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:22.834 12:03:29 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:11:22.834 12:03:30 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:11:22.834 12:03:30 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:11:22.834 12:03:30 -- bdev/bdev_raid.sh@409 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:23.092 [2024-07-25 12:03:30.233737] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:23.092 [2024-07-25 12:03:30.233781] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.092 [2024-07-25 12:03:30.233797] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13797c0 00:11:23.092 [2024-07-25 12:03:30.233805] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.092 
[2024-07-25 12:03:30.235029] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.092 [2024-07-25 12:03:30.235050] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:23.092 [2024-07-25 12:03:30.235103] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:11:23.092 [2024-07-25 12:03:30.235122] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:23.092 pt1 00:11:23.092 12:03:30 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:23.092 12:03:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:23.092 12:03:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:23.092 12:03:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:23.092 12:03:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:23.092 12:03:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:23.092 12:03:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:23.092 12:03:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:23.092 12:03:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:23.092 12:03:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:23.092 12:03:30 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:23.092 12:03:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.350 12:03:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:23.350 "name": "raid_bdev1", 00:11:23.350 "uuid": "13b7288f-ba9c-4d59-95eb-62ab083209c2", 00:11:23.350 "strip_size_kb": 64, 00:11:23.350 "state": "configuring", 00:11:23.350 "raid_level": "concat", 00:11:23.350 "superblock": true, 00:11:23.350 "num_base_bdevs": 3, 00:11:23.350 "num_base_bdevs_discovered": 1, 00:11:23.350 "num_base_bdevs_operational": 3, 
00:11:23.350 "base_bdevs_list": [ 00:11:23.350 { 00:11:23.350 "name": "pt1", 00:11:23.350 "uuid": "23b215e3-82b0-5d91-80bc-a49270623cbc", 00:11:23.350 "is_configured": true, 00:11:23.350 "data_offset": 2048, 00:11:23.350 "data_size": 63488 00:11:23.350 }, 00:11:23.350 { 00:11:23.350 "name": null, 00:11:23.350 "uuid": "d494088a-a4ff-59ea-8549-aa42fddfc0ab", 00:11:23.350 "is_configured": false, 00:11:23.350 "data_offset": 2048, 00:11:23.350 "data_size": 63488 00:11:23.350 }, 00:11:23.350 { 00:11:23.350 "name": null, 00:11:23.350 "uuid": "0e35f6c2-65cb-5f12-bb77-b65ee880ab2b", 00:11:23.350 "is_configured": false, 00:11:23.350 "data_offset": 2048, 00:11:23.350 "data_size": 63488 00:11:23.350 } 00:11:23.350 ] 00:11:23.350 }' 00:11:23.350 12:03:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:23.350 12:03:30 -- common/autotest_common.sh@10 -- # set +x 00:11:23.608 12:03:30 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:11:23.608 12:03:30 -- bdev/bdev_raid.sh@416 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:23.866 [2024-07-25 12:03:31.047836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:23.866 [2024-07-25 12:03:31.047876] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.866 [2024-07-25 12:03:31.047890] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13774d0 00:11:23.866 [2024-07-25 12:03:31.047899] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.866 [2024-07-25 12:03:31.048147] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.866 [2024-07-25 12:03:31.048158] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:23.866 [2024-07-25 12:03:31.048206] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev 
pt2 00:11:23.866 [2024-07-25 12:03:31.048219] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:23.866 pt2 00:11:23.866 12:03:31 -- bdev/bdev_raid.sh@417 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:24.124 [2024-07-25 12:03:31.228319] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:24.124 12:03:31 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:24.124 12:03:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:24.124 12:03:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:24.124 12:03:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:24.124 12:03:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:24.124 12:03:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:24.124 12:03:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:24.124 12:03:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:24.124 12:03:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:24.124 12:03:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:24.124 12:03:31 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:24.124 12:03:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:24.124 12:03:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:24.124 "name": "raid_bdev1", 00:11:24.124 "uuid": "13b7288f-ba9c-4d59-95eb-62ab083209c2", 00:11:24.124 "strip_size_kb": 64, 00:11:24.124 "state": "configuring", 00:11:24.124 "raid_level": "concat", 00:11:24.124 "superblock": true, 00:11:24.124 "num_base_bdevs": 3, 00:11:24.124 "num_base_bdevs_discovered": 1, 00:11:24.124 "num_base_bdevs_operational": 3, 00:11:24.124 "base_bdevs_list": [ 00:11:24.124 { 00:11:24.124 "name": "pt1", 00:11:24.124 
"uuid": "23b215e3-82b0-5d91-80bc-a49270623cbc", 00:11:24.124 "is_configured": true, 00:11:24.124 "data_offset": 2048, 00:11:24.124 "data_size": 63488 00:11:24.124 }, 00:11:24.124 { 00:11:24.124 "name": null, 00:11:24.124 "uuid": "d494088a-a4ff-59ea-8549-aa42fddfc0ab", 00:11:24.124 "is_configured": false, 00:11:24.124 "data_offset": 2048, 00:11:24.124 "data_size": 63488 00:11:24.124 }, 00:11:24.124 { 00:11:24.124 "name": null, 00:11:24.124 "uuid": "0e35f6c2-65cb-5f12-bb77-b65ee880ab2b", 00:11:24.124 "is_configured": false, 00:11:24.125 "data_offset": 2048, 00:11:24.125 "data_size": 63488 00:11:24.125 } 00:11:24.125 ] 00:11:24.125 }' 00:11:24.125 12:03:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:24.125 12:03:31 -- common/autotest_common.sh@10 -- # set +x 00:11:24.690 12:03:31 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:11:24.690 12:03:31 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:11:24.690 12:03:31 -- bdev/bdev_raid.sh@423 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:24.949 [2024-07-25 12:03:32.018334] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:24.949 [2024-07-25 12:03:32.018369] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.949 [2024-07-25 12:03:32.018383] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x13783d0 00:11:24.949 [2024-07-25 12:03:32.018391] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.949 [2024-07-25 12:03:32.018625] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.949 [2024-07-25 12:03:32.018636] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:24.949 [2024-07-25 12:03:32.018682] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:11:24.949 
[2024-07-25 12:03:32.018694] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:24.949 pt2 00:11:24.949 12:03:32 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:11:24.949 12:03:32 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:11:24.949 12:03:32 -- bdev/bdev_raid.sh@423 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:24.949 [2024-07-25 12:03:32.186766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:24.949 [2024-07-25 12:03:32.186788] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:24.949 [2024-07-25 12:03:32.186799] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x137c4a0 00:11:24.949 [2024-07-25 12:03:32.186807] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:24.949 [2024-07-25 12:03:32.186986] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:24.949 [2024-07-25 12:03:32.186996] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:24.949 [2024-07-25 12:03:32.187028] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:11:24.949 [2024-07-25 12:03:32.187043] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:24.949 [2024-07-25 12:03:32.187106] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x13777f0 00:11:24.949 [2024-07-25 12:03:32.187113] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:24.949 [2024-07-25 12:03:32.187214] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x137b100 00:11:24.949 [2024-07-25 12:03:32.187314] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x13777f0 00:11:24.949 [2024-07-25 12:03:32.187320] 
bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x13777f0 00:11:24.949 [2024-07-25 12:03:32.187381] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:24.949 pt3 00:11:24.949 12:03:32 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:11:24.949 12:03:32 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:11:24.949 12:03:32 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:24.949 12:03:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:24.949 12:03:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:24.949 12:03:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:11:24.949 12:03:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:24.949 12:03:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:24.949 12:03:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:24.949 12:03:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:24.949 12:03:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:24.949 12:03:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:24.949 12:03:32 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:24.949 12:03:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.208 12:03:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:25.208 "name": "raid_bdev1", 00:11:25.208 "uuid": "13b7288f-ba9c-4d59-95eb-62ab083209c2", 00:11:25.208 "strip_size_kb": 64, 00:11:25.208 "state": "online", 00:11:25.208 "raid_level": "concat", 00:11:25.208 "superblock": true, 00:11:25.208 "num_base_bdevs": 3, 00:11:25.208 "num_base_bdevs_discovered": 3, 00:11:25.208 "num_base_bdevs_operational": 3, 00:11:25.208 "base_bdevs_list": [ 00:11:25.208 { 00:11:25.208 "name": "pt1", 00:11:25.208 "uuid": 
"23b215e3-82b0-5d91-80bc-a49270623cbc", 00:11:25.208 "is_configured": true, 00:11:25.208 "data_offset": 2048, 00:11:25.208 "data_size": 63488 00:11:25.208 }, 00:11:25.208 { 00:11:25.208 "name": "pt2", 00:11:25.208 "uuid": "d494088a-a4ff-59ea-8549-aa42fddfc0ab", 00:11:25.208 "is_configured": true, 00:11:25.208 "data_offset": 2048, 00:11:25.208 "data_size": 63488 00:11:25.208 }, 00:11:25.208 { 00:11:25.208 "name": "pt3", 00:11:25.208 "uuid": "0e35f6c2-65cb-5f12-bb77-b65ee880ab2b", 00:11:25.208 "is_configured": true, 00:11:25.208 "data_offset": 2048, 00:11:25.208 "data_size": 63488 00:11:25.208 } 00:11:25.208 ] 00:11:25.208 }' 00:11:25.208 12:03:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:25.208 12:03:32 -- common/autotest_common.sh@10 -- # set +x 00:11:25.776 12:03:32 -- bdev/bdev_raid.sh@430 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:25.776 12:03:32 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:11:25.776 [2024-07-25 12:03:32.968924] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:25.776 12:03:32 -- bdev/bdev_raid.sh@430 -- # '[' 13b7288f-ba9c-4d59-95eb-62ab083209c2 '!=' 13b7288f-ba9c-4d59-95eb-62ab083209c2 ']' 00:11:25.776 12:03:32 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:11:25.776 12:03:32 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:11:25.776 12:03:32 -- bdev/bdev_raid.sh@197 -- # return 1 00:11:25.776 12:03:32 -- bdev/bdev_raid.sh@511 -- # killprocess 1233765 00:11:25.776 12:03:32 -- common/autotest_common.sh@926 -- # '[' -z 1233765 ']' 00:11:25.776 12:03:32 -- common/autotest_common.sh@930 -- # kill -0 1233765 00:11:25.776 12:03:32 -- common/autotest_common.sh@931 -- # uname 00:11:25.776 12:03:32 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:25.776 12:03:32 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1233765 00:11:25.776 12:03:33 -- common/autotest_common.sh@932 -- # 
process_name=reactor_0 00:11:25.776 12:03:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:25.776 12:03:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1233765' 00:11:25.776 killing process with pid 1233765 00:11:25.776 12:03:33 -- common/autotest_common.sh@945 -- # kill 1233765 00:11:25.776 [2024-07-25 12:03:33.016170] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:25.776 [2024-07-25 12:03:33.016218] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:25.776 [2024-07-25 12:03:33.016255] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:25.776 [2024-07-25 12:03:33.016262] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x13777f0 name raid_bdev1, state offline 00:11:25.776 12:03:33 -- common/autotest_common.sh@950 -- # wait 1233765 00:11:25.776 [2024-07-25 12:03:33.042078] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:26.034 12:03:33 -- bdev/bdev_raid.sh@513 -- # return 0 00:11:26.034 00:11:26.034 real 0m7.332s 00:11:26.034 user 0m12.691s 00:11:26.034 sys 0m1.433s 00:11:26.034 12:03:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:26.034 12:03:33 -- common/autotest_common.sh@10 -- # set +x 00:11:26.034 ************************************ 00:11:26.034 END TEST raid_superblock_test 00:11:26.034 ************************************ 00:11:26.034 12:03:33 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:11:26.034 12:03:33 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:11:26.034 12:03:33 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:11:26.034 12:03:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:26.034 12:03:33 -- common/autotest_common.sh@10 -- # set +x 00:11:26.034 ************************************ 00:11:26.034 START TEST raid_state_function_test 
00:11:26.034 ************************************ 00:11:26.034 12:03:33 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 false 00:11:26.034 12:03:33 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:11:26.034 12:03:33 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:11:26.034 12:03:33 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:11:26.034 12:03:33 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:11:26.034 12:03:33 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:11:26.034 12:03:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:26.034 12:03:33 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:11:26.034 12:03:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:26.034 12:03:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:26.034 12:03:33 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:11:26.034 12:03:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:26.034 12:03:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:26.034 12:03:33 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:11:26.034 12:03:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:26.034 12:03:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:26.034 12:03:33 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:26.034 12:03:33 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:11:26.034 12:03:33 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:11:26.034 12:03:33 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:11:26.034 12:03:33 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:11:26.034 12:03:33 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:11:26.034 12:03:33 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:11:26.034 12:03:33 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:11:26.034 12:03:33 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:11:26.034 12:03:33 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:11:26.034 
12:03:33 -- bdev/bdev_raid.sh@226 -- # raid_pid=1234846 00:11:26.034 12:03:33 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 1234846' 00:11:26.034 Process raid pid: 1234846 00:11:26.034 12:03:33 -- bdev/bdev_raid.sh@225 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:26.034 12:03:33 -- bdev/bdev_raid.sh@228 -- # waitforlisten 1234846 /var/tmp/spdk-raid.sock 00:11:26.034 12:03:33 -- common/autotest_common.sh@819 -- # '[' -z 1234846 ']' 00:11:26.034 12:03:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:26.034 12:03:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:26.034 12:03:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:26.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:26.034 12:03:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:26.034 12:03:33 -- common/autotest_common.sh@10 -- # set +x 00:11:26.293 [2024-07-25 12:03:33.361075] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:11:26.293 [2024-07-25 12:03:33.361128] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.293 [2024-07-25 12:03:33.449806] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.293 [2024-07-25 12:03:33.538992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.293 [2024-07-25 12:03:33.601369] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:26.293 [2024-07-25 12:03:33.601397] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:26.860 12:03:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:26.860 12:03:34 -- common/autotest_common.sh@852 -- # return 0 00:11:26.860 12:03:34 -- bdev/bdev_raid.sh@232 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:27.118 [2024-07-25 12:03:34.293685] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:27.118 [2024-07-25 12:03:34.293720] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:27.118 [2024-07-25 12:03:34.293726] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:27.118 [2024-07-25 12:03:34.293734] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:27.118 [2024-07-25 12:03:34.293740] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:27.118 [2024-07-25 12:03:34.293747] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:27.118 12:03:34 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:27.118 12:03:34 -- bdev/bdev_raid.sh@117 
-- # local raid_bdev_name=Existed_Raid 00:11:27.118 12:03:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:27.118 12:03:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:27.118 12:03:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:27.118 12:03:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:27.118 12:03:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:27.118 12:03:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:27.118 12:03:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:27.118 12:03:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:27.118 12:03:34 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:27.118 12:03:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.377 12:03:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:27.377 "name": "Existed_Raid", 00:11:27.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.377 "strip_size_kb": 0, 00:11:27.377 "state": "configuring", 00:11:27.377 "raid_level": "raid1", 00:11:27.377 "superblock": false, 00:11:27.377 "num_base_bdevs": 3, 00:11:27.377 "num_base_bdevs_discovered": 0, 00:11:27.377 "num_base_bdevs_operational": 3, 00:11:27.377 "base_bdevs_list": [ 00:11:27.377 { 00:11:27.377 "name": "BaseBdev1", 00:11:27.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.377 "is_configured": false, 00:11:27.377 "data_offset": 0, 00:11:27.377 "data_size": 0 00:11:27.377 }, 00:11:27.377 { 00:11:27.377 "name": "BaseBdev2", 00:11:27.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.377 "is_configured": false, 00:11:27.377 "data_offset": 0, 00:11:27.377 "data_size": 0 00:11:27.377 }, 00:11:27.377 { 00:11:27.377 "name": "BaseBdev3", 00:11:27.377 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.377 "is_configured": false, 00:11:27.377 "data_offset": 
0, 00:11:27.377 "data_size": 0 00:11:27.377 } 00:11:27.377 ] 00:11:27.377 }' 00:11:27.377 12:03:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:27.377 12:03:34 -- common/autotest_common.sh@10 -- # set +x 00:11:27.945 12:03:34 -- bdev/bdev_raid.sh@234 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:27.945 [2024-07-25 12:03:35.099681] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:27.945 [2024-07-25 12:03:35.099700] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x205ad60 name Existed_Raid, state configuring 00:11:27.945 12:03:35 -- bdev/bdev_raid.sh@238 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:28.203 [2024-07-25 12:03:35.280152] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:28.203 [2024-07-25 12:03:35.280172] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:28.203 [2024-07-25 12:03:35.280178] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:28.203 [2024-07-25 12:03:35.280186] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:28.203 [2024-07-25 12:03:35.280191] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:28.203 [2024-07-25 12:03:35.280199] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:28.203 12:03:35 -- bdev/bdev_raid.sh@239 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:28.203 [2024-07-25 12:03:35.461214] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:28.203 BaseBdev1 00:11:28.203 
12:03:35 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:11:28.203 12:03:35 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:11:28.203 12:03:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:28.203 12:03:35 -- common/autotest_common.sh@889 -- # local i 00:11:28.204 12:03:35 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:28.204 12:03:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:28.204 12:03:35 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:28.462 12:03:35 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:28.720 [ 00:11:28.720 { 00:11:28.720 "name": "BaseBdev1", 00:11:28.720 "aliases": [ 00:11:28.720 "85a5d486-a6ef-4a80-8e96-bf8e016074c5" 00:11:28.720 ], 00:11:28.720 "product_name": "Malloc disk", 00:11:28.720 "block_size": 512, 00:11:28.720 "num_blocks": 65536, 00:11:28.720 "uuid": "85a5d486-a6ef-4a80-8e96-bf8e016074c5", 00:11:28.720 "assigned_rate_limits": { 00:11:28.720 "rw_ios_per_sec": 0, 00:11:28.720 "rw_mbytes_per_sec": 0, 00:11:28.720 "r_mbytes_per_sec": 0, 00:11:28.720 "w_mbytes_per_sec": 0 00:11:28.720 }, 00:11:28.720 "claimed": true, 00:11:28.720 "claim_type": "exclusive_write", 00:11:28.720 "zoned": false, 00:11:28.720 "supported_io_types": { 00:11:28.720 "read": true, 00:11:28.720 "write": true, 00:11:28.720 "unmap": true, 00:11:28.720 "write_zeroes": true, 00:11:28.720 "flush": true, 00:11:28.720 "reset": true, 00:11:28.720 "compare": false, 00:11:28.720 "compare_and_write": false, 00:11:28.720 "abort": true, 00:11:28.720 "nvme_admin": false, 00:11:28.720 "nvme_io": false 00:11:28.720 }, 00:11:28.720 "memory_domains": [ 00:11:28.720 { 00:11:28.720 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.720 "dma_device_type": 2 00:11:28.720 } 00:11:28.720 ], 
00:11:28.720 "driver_specific": {} 00:11:28.720 } 00:11:28.720 ] 00:11:28.720 12:03:35 -- common/autotest_common.sh@895 -- # return 0 00:11:28.720 12:03:35 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:28.720 12:03:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:28.720 12:03:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:28.720 12:03:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:28.720 12:03:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:28.720 12:03:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:28.720 12:03:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:28.720 12:03:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:28.720 12:03:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:28.720 12:03:35 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:28.720 12:03:35 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:28.720 12:03:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.720 12:03:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:28.720 "name": "Existed_Raid", 00:11:28.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.720 "strip_size_kb": 0, 00:11:28.720 "state": "configuring", 00:11:28.720 "raid_level": "raid1", 00:11:28.720 "superblock": false, 00:11:28.720 "num_base_bdevs": 3, 00:11:28.720 "num_base_bdevs_discovered": 1, 00:11:28.720 "num_base_bdevs_operational": 3, 00:11:28.720 "base_bdevs_list": [ 00:11:28.720 { 00:11:28.720 "name": "BaseBdev1", 00:11:28.720 "uuid": "85a5d486-a6ef-4a80-8e96-bf8e016074c5", 00:11:28.720 "is_configured": true, 00:11:28.720 "data_offset": 0, 00:11:28.720 "data_size": 65536 00:11:28.720 }, 00:11:28.720 { 00:11:28.720 "name": "BaseBdev2", 00:11:28.720 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:28.720 "is_configured": false, 00:11:28.720 "data_offset": 0, 00:11:28.720 "data_size": 0 00:11:28.720 }, 00:11:28.720 { 00:11:28.720 "name": "BaseBdev3", 00:11:28.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:28.720 "is_configured": false, 00:11:28.720 "data_offset": 0, 00:11:28.720 "data_size": 0 00:11:28.720 } 00:11:28.720 ] 00:11:28.720 }' 00:11:28.720 12:03:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:28.720 12:03:35 -- common/autotest_common.sh@10 -- # set +x 00:11:29.287 12:03:36 -- bdev/bdev_raid.sh@242 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:29.544 [2024-07-25 12:03:36.616191] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:29.544 [2024-07-25 12:03:36.616220] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x205a630 name Existed_Raid, state configuring 00:11:29.544 12:03:36 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:11:29.544 12:03:36 -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:29.544 [2024-07-25 12:03:36.784646] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:29.544 [2024-07-25 12:03:36.785700] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:29.545 [2024-07-25 12:03:36.785724] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:29.545 [2024-07-25 12:03:36.785730] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:29.545 [2024-07-25 12:03:36.785737] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:29.545 12:03:36 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:11:29.545 
12:03:36 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:29.545 12:03:36 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:29.545 12:03:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:29.545 12:03:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:29.545 12:03:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:29.545 12:03:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:29.545 12:03:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:29.545 12:03:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:29.545 12:03:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:29.545 12:03:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:29.545 12:03:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:29.545 12:03:36 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:29.545 12:03:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.803 12:03:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:29.803 "name": "Existed_Raid", 00:11:29.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.803 "strip_size_kb": 0, 00:11:29.803 "state": "configuring", 00:11:29.803 "raid_level": "raid1", 00:11:29.803 "superblock": false, 00:11:29.803 "num_base_bdevs": 3, 00:11:29.803 "num_base_bdevs_discovered": 1, 00:11:29.803 "num_base_bdevs_operational": 3, 00:11:29.803 "base_bdevs_list": [ 00:11:29.803 { 00:11:29.803 "name": "BaseBdev1", 00:11:29.803 "uuid": "85a5d486-a6ef-4a80-8e96-bf8e016074c5", 00:11:29.803 "is_configured": true, 00:11:29.803 "data_offset": 0, 00:11:29.803 "data_size": 65536 00:11:29.803 }, 00:11:29.803 { 00:11:29.803 "name": "BaseBdev2", 00:11:29.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.803 "is_configured": false, 00:11:29.803 
"data_offset": 0, 00:11:29.803 "data_size": 0 00:11:29.803 }, 00:11:29.803 { 00:11:29.803 "name": "BaseBdev3", 00:11:29.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.803 "is_configured": false, 00:11:29.803 "data_offset": 0, 00:11:29.803 "data_size": 0 00:11:29.803 } 00:11:29.803 ] 00:11:29.803 }' 00:11:29.803 12:03:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:29.803 12:03:36 -- common/autotest_common.sh@10 -- # set +x 00:11:30.370 12:03:37 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:30.370 [2024-07-25 12:03:37.621625] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:30.370 BaseBdev2 00:11:30.370 12:03:37 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:11:30.370 12:03:37 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:11:30.370 12:03:37 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:30.370 12:03:37 -- common/autotest_common.sh@889 -- # local i 00:11:30.370 12:03:37 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:30.370 12:03:37 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:30.371 12:03:37 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:30.629 12:03:37 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:30.887 [ 00:11:30.887 { 00:11:30.887 "name": "BaseBdev2", 00:11:30.887 "aliases": [ 00:11:30.887 "4fca8d3a-63ab-4d7d-a69a-5a4d5e14c8aa" 00:11:30.887 ], 00:11:30.887 "product_name": "Malloc disk", 00:11:30.887 "block_size": 512, 00:11:30.887 "num_blocks": 65536, 00:11:30.887 "uuid": "4fca8d3a-63ab-4d7d-a69a-5a4d5e14c8aa", 00:11:30.887 "assigned_rate_limits": { 00:11:30.887 "rw_ios_per_sec": 0, 
00:11:30.887 "rw_mbytes_per_sec": 0, 00:11:30.887 "r_mbytes_per_sec": 0, 00:11:30.887 "w_mbytes_per_sec": 0 00:11:30.887 }, 00:11:30.887 "claimed": true, 00:11:30.887 "claim_type": "exclusive_write", 00:11:30.887 "zoned": false, 00:11:30.887 "supported_io_types": { 00:11:30.887 "read": true, 00:11:30.887 "write": true, 00:11:30.887 "unmap": true, 00:11:30.887 "write_zeroes": true, 00:11:30.887 "flush": true, 00:11:30.887 "reset": true, 00:11:30.887 "compare": false, 00:11:30.887 "compare_and_write": false, 00:11:30.887 "abort": true, 00:11:30.887 "nvme_admin": false, 00:11:30.887 "nvme_io": false 00:11:30.887 }, 00:11:30.887 "memory_domains": [ 00:11:30.887 { 00:11:30.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:30.887 "dma_device_type": 2 00:11:30.887 } 00:11:30.887 ], 00:11:30.887 "driver_specific": {} 00:11:30.887 } 00:11:30.887 ] 00:11:30.887 12:03:37 -- common/autotest_common.sh@895 -- # return 0 00:11:30.887 12:03:37 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:11:30.887 12:03:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:30.887 12:03:37 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:30.887 12:03:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:30.887 12:03:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:30.887 12:03:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:30.887 12:03:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:30.887 12:03:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:30.887 12:03:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:30.887 12:03:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:30.887 12:03:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:30.887 12:03:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:30.887 12:03:37 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:30.887 12:03:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.887 12:03:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:30.888 "name": "Existed_Raid", 00:11:30.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.888 "strip_size_kb": 0, 00:11:30.888 "state": "configuring", 00:11:30.888 "raid_level": "raid1", 00:11:30.888 "superblock": false, 00:11:30.888 "num_base_bdevs": 3, 00:11:30.888 "num_base_bdevs_discovered": 2, 00:11:30.888 "num_base_bdevs_operational": 3, 00:11:30.888 "base_bdevs_list": [ 00:11:30.888 { 00:11:30.888 "name": "BaseBdev1", 00:11:30.888 "uuid": "85a5d486-a6ef-4a80-8e96-bf8e016074c5", 00:11:30.888 "is_configured": true, 00:11:30.888 "data_offset": 0, 00:11:30.888 "data_size": 65536 00:11:30.888 }, 00:11:30.888 { 00:11:30.888 "name": "BaseBdev2", 00:11:30.888 "uuid": "4fca8d3a-63ab-4d7d-a69a-5a4d5e14c8aa", 00:11:30.888 "is_configured": true, 00:11:30.888 "data_offset": 0, 00:11:30.888 "data_size": 65536 00:11:30.888 }, 00:11:30.888 { 00:11:30.888 "name": "BaseBdev3", 00:11:30.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.888 "is_configured": false, 00:11:30.888 "data_offset": 0, 00:11:30.888 "data_size": 0 00:11:30.888 } 00:11:30.888 ] 00:11:30.888 }' 00:11:30.888 12:03:38 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:30.888 12:03:38 -- common/autotest_common.sh@10 -- # set +x 00:11:31.454 12:03:38 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:31.712 [2024-07-25 12:03:38.811489] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:31.712 [2024-07-25 12:03:38.811525] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x205b630 00:11:31.712 [2024-07-25 12:03:38.811531] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 
00:11:31.712 [2024-07-25 12:03:38.811702] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x205d200 00:11:31.713 [2024-07-25 12:03:38.811785] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x205b630 00:11:31.713 [2024-07-25 12:03:38.811792] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x205b630 00:11:31.713 [2024-07-25 12:03:38.811925] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:31.713 BaseBdev3 00:11:31.713 12:03:38 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:11:31.713 12:03:38 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:11:31.713 12:03:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:31.713 12:03:38 -- common/autotest_common.sh@889 -- # local i 00:11:31.713 12:03:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:31.713 12:03:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:31.713 12:03:38 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:31.713 12:03:38 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:31.971 [ 00:11:31.971 { 00:11:31.971 "name": "BaseBdev3", 00:11:31.971 "aliases": [ 00:11:31.971 "b9e6910b-81ee-47ea-be5d-dd440c195693" 00:11:31.971 ], 00:11:31.971 "product_name": "Malloc disk", 00:11:31.971 "block_size": 512, 00:11:31.971 "num_blocks": 65536, 00:11:31.971 "uuid": "b9e6910b-81ee-47ea-be5d-dd440c195693", 00:11:31.971 "assigned_rate_limits": { 00:11:31.971 "rw_ios_per_sec": 0, 00:11:31.971 "rw_mbytes_per_sec": 0, 00:11:31.971 "r_mbytes_per_sec": 0, 00:11:31.971 "w_mbytes_per_sec": 0 00:11:31.971 }, 00:11:31.971 "claimed": true, 00:11:31.971 "claim_type": "exclusive_write", 00:11:31.971 "zoned": false, 00:11:31.971 
"supported_io_types": { 00:11:31.971 "read": true, 00:11:31.971 "write": true, 00:11:31.971 "unmap": true, 00:11:31.971 "write_zeroes": true, 00:11:31.971 "flush": true, 00:11:31.971 "reset": true, 00:11:31.971 "compare": false, 00:11:31.971 "compare_and_write": false, 00:11:31.971 "abort": true, 00:11:31.971 "nvme_admin": false, 00:11:31.971 "nvme_io": false 00:11:31.971 }, 00:11:31.971 "memory_domains": [ 00:11:31.971 { 00:11:31.971 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.971 "dma_device_type": 2 00:11:31.971 } 00:11:31.971 ], 00:11:31.971 "driver_specific": {} 00:11:31.971 } 00:11:31.971 ] 00:11:31.971 12:03:39 -- common/autotest_common.sh@895 -- # return 0 00:11:31.971 12:03:39 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:11:31.971 12:03:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:31.971 12:03:39 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:31.971 12:03:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:31.971 12:03:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:31.971 12:03:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:31.971 12:03:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:31.971 12:03:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:31.971 12:03:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:31.971 12:03:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:31.971 12:03:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:31.971 12:03:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:31.971 12:03:39 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:31.971 12:03:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.229 12:03:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:32.229 "name": "Existed_Raid", 00:11:32.229 
"uuid": "5e149665-0a70-4950-a454-5736ce9095e8", 00:11:32.229 "strip_size_kb": 0, 00:11:32.229 "state": "online", 00:11:32.229 "raid_level": "raid1", 00:11:32.229 "superblock": false, 00:11:32.229 "num_base_bdevs": 3, 00:11:32.229 "num_base_bdevs_discovered": 3, 00:11:32.229 "num_base_bdevs_operational": 3, 00:11:32.229 "base_bdevs_list": [ 00:11:32.229 { 00:11:32.229 "name": "BaseBdev1", 00:11:32.229 "uuid": "85a5d486-a6ef-4a80-8e96-bf8e016074c5", 00:11:32.229 "is_configured": true, 00:11:32.229 "data_offset": 0, 00:11:32.229 "data_size": 65536 00:11:32.229 }, 00:11:32.229 { 00:11:32.229 "name": "BaseBdev2", 00:11:32.229 "uuid": "4fca8d3a-63ab-4d7d-a69a-5a4d5e14c8aa", 00:11:32.229 "is_configured": true, 00:11:32.229 "data_offset": 0, 00:11:32.229 "data_size": 65536 00:11:32.229 }, 00:11:32.229 { 00:11:32.229 "name": "BaseBdev3", 00:11:32.229 "uuid": "b9e6910b-81ee-47ea-be5d-dd440c195693", 00:11:32.229 "is_configured": true, 00:11:32.229 "data_offset": 0, 00:11:32.229 "data_size": 65536 00:11:32.229 } 00:11:32.229 ] 00:11:32.229 }' 00:11:32.229 12:03:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:32.229 12:03:39 -- common/autotest_common.sh@10 -- # set +x 00:11:32.799 12:03:39 -- bdev/bdev_raid.sh@262 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:32.799 [2024-07-25 12:03:39.986584] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:32.799 12:03:40 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:11:32.799 12:03:40 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:11:32.799 12:03:40 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:11:32.799 12:03:40 -- bdev/bdev_raid.sh@196 -- # return 0 00:11:32.799 12:03:40 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:11:32.799 12:03:40 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:32.799 12:03:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 
00:11:32.799 12:03:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:32.799 12:03:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:32.799 12:03:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:32.799 12:03:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:32.799 12:03:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:32.799 12:03:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:32.799 12:03:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:32.799 12:03:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:32.799 12:03:40 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:32.799 12:03:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.058 12:03:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:33.058 "name": "Existed_Raid", 00:11:33.058 "uuid": "5e149665-0a70-4950-a454-5736ce9095e8", 00:11:33.058 "strip_size_kb": 0, 00:11:33.058 "state": "online", 00:11:33.058 "raid_level": "raid1", 00:11:33.058 "superblock": false, 00:11:33.058 "num_base_bdevs": 3, 00:11:33.058 "num_base_bdevs_discovered": 2, 00:11:33.058 "num_base_bdevs_operational": 2, 00:11:33.058 "base_bdevs_list": [ 00:11:33.058 { 00:11:33.058 "name": null, 00:11:33.058 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:33.058 "is_configured": false, 00:11:33.058 "data_offset": 0, 00:11:33.058 "data_size": 65536 00:11:33.058 }, 00:11:33.058 { 00:11:33.058 "name": "BaseBdev2", 00:11:33.058 "uuid": "4fca8d3a-63ab-4d7d-a69a-5a4d5e14c8aa", 00:11:33.058 "is_configured": true, 00:11:33.058 "data_offset": 0, 00:11:33.058 "data_size": 65536 00:11:33.058 }, 00:11:33.058 { 00:11:33.058 "name": "BaseBdev3", 00:11:33.058 "uuid": "b9e6910b-81ee-47ea-be5d-dd440c195693", 00:11:33.058 "is_configured": true, 00:11:33.058 "data_offset": 0, 00:11:33.058 "data_size": 65536 00:11:33.058 } 
00:11:33.058 ] 00:11:33.058 }' 00:11:33.058 12:03:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:33.058 12:03:40 -- common/autotest_common.sh@10 -- # set +x 00:11:33.625 12:03:40 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:11:33.625 12:03:40 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:33.625 12:03:40 -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:33.625 12:03:40 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:11:33.625 12:03:40 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:11:33.625 12:03:40 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:33.626 12:03:40 -- bdev/bdev_raid.sh@279 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:33.884 [2024-07-25 12:03:40.994154] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:33.884 12:03:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:11:33.884 12:03:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:33.884 12:03:41 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:11:33.884 12:03:41 -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:34.142 12:03:41 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:11:34.142 12:03:41 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:34.142 12:03:41 -- bdev/bdev_raid.sh@279 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:34.142 [2024-07-25 12:03:41.351021] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:34.142 [2024-07-25 12:03:41.351048] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:34.142 [2024-07-25 12:03:41.351072] 
bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:34.142 [2024-07-25 12:03:41.360893] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:34.142 [2024-07-25 12:03:41.360907] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x205b630 name Existed_Raid, state offline 00:11:34.142 12:03:41 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:11:34.142 12:03:41 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:34.142 12:03:41 -- bdev/bdev_raid.sh@281 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:34.142 12:03:41 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:11:34.401 12:03:41 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:11:34.401 12:03:41 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:11:34.401 12:03:41 -- bdev/bdev_raid.sh@287 -- # killprocess 1234846 00:11:34.401 12:03:41 -- common/autotest_common.sh@926 -- # '[' -z 1234846 ']' 00:11:34.401 12:03:41 -- common/autotest_common.sh@930 -- # kill -0 1234846 00:11:34.401 12:03:41 -- common/autotest_common.sh@931 -- # uname 00:11:34.401 12:03:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:34.401 12:03:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1234846 00:11:34.401 12:03:41 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:34.401 12:03:41 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:34.401 12:03:41 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1234846' 00:11:34.401 killing process with pid 1234846 00:11:34.401 12:03:41 -- common/autotest_common.sh@945 -- # kill 1234846 00:11:34.401 [2024-07-25 12:03:41.593258] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:34.401 12:03:41 -- common/autotest_common.sh@950 -- # wait 1234846 00:11:34.401 [2024-07-25 12:03:41.594161] 
bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:34.659 12:03:41 -- bdev/bdev_raid.sh@289 -- # return 0 00:11:34.659 00:11:34.659 real 0m8.522s 00:11:34.659 user 0m14.875s 00:11:34.659 sys 0m1.708s 00:11:34.659 12:03:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:34.659 12:03:41 -- common/autotest_common.sh@10 -- # set +x 00:11:34.659 ************************************ 00:11:34.659 END TEST raid_state_function_test 00:11:34.659 ************************************ 00:11:34.659 12:03:41 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:11:34.659 12:03:41 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:11:34.659 12:03:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:34.659 12:03:41 -- common/autotest_common.sh@10 -- # set +x 00:11:34.659 ************************************ 00:11:34.659 START TEST raid_state_function_test_sb 00:11:34.659 ************************************ 00:11:34.659 12:03:41 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 3 true 00:11:34.659 12:03:41 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:11:34.659 12:03:41 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=3 00:11:34.659 12:03:41 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:11:34.659 12:03:41 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:11:34.659 12:03:41 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:11:34.659 12:03:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:34.659 12:03:41 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:11:34.659 12:03:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:34.659 12:03:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:34.659 12:03:41 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:11:34.659 12:03:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:34.659 12:03:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:34.659 12:03:41 -- bdev/bdev_raid.sh@208 -- # 
echo BaseBdev3 00:11:34.659 12:03:41 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:34.659 12:03:41 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:34.659 12:03:41 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:34.659 12:03:41 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:11:34.659 12:03:41 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:11:34.659 12:03:41 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:11:34.659 12:03:41 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:11:34.659 12:03:41 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:11:34.659 12:03:41 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:11:34.659 12:03:41 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:11:34.659 12:03:41 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:11:34.659 12:03:41 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:11:34.659 12:03:41 -- bdev/bdev_raid.sh@226 -- # raid_pid=1236257 00:11:34.659 12:03:41 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 1236257' 00:11:34.659 Process raid pid: 1236257 00:11:34.659 12:03:41 -- bdev/bdev_raid.sh@225 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:34.659 12:03:41 -- bdev/bdev_raid.sh@228 -- # waitforlisten 1236257 /var/tmp/spdk-raid.sock 00:11:34.659 12:03:41 -- common/autotest_common.sh@819 -- # '[' -z 1236257 ']' 00:11:34.659 12:03:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:34.659 12:03:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:34.659 12:03:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:34.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:11:34.660 12:03:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:34.660 12:03:41 -- common/autotest_common.sh@10 -- # set +x 00:11:34.660 [2024-07-25 12:03:41.928476] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:34.660 [2024-07-25 12:03:41.928527] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:34.920 [2024-07-25 12:03:42.017422] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:34.920 [2024-07-25 12:03:42.107252] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:34.920 [2024-07-25 12:03:42.167110] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:34.920 [2024-07-25 12:03:42.167136] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.523 12:03:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:35.523 12:03:42 -- common/autotest_common.sh@852 -- # return 0 00:11:35.523 12:03:42 -- bdev/bdev_raid.sh@232 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:35.782 [2024-07-25 12:03:42.870853] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:35.782 [2024-07-25 12:03:42.870882] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:35.782 [2024-07-25 12:03:42.870888] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:35.782 [2024-07-25 12:03:42.870895] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:35.782 [2024-07-25 12:03:42.870901] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:35.782 
[2024-07-25 12:03:42.870908] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:35.782 12:03:42 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:35.782 12:03:42 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:35.782 12:03:42 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:35.782 12:03:42 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:35.782 12:03:42 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:35.782 12:03:42 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:35.782 12:03:42 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:35.782 12:03:42 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:35.782 12:03:42 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:35.782 12:03:42 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:35.782 12:03:42 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.782 12:03:42 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:35.782 12:03:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:35.782 "name": "Existed_Raid", 00:11:35.782 "uuid": "d00f34cd-44f7-48ee-9f8d-7526ee489dda", 00:11:35.782 "strip_size_kb": 0, 00:11:35.782 "state": "configuring", 00:11:35.782 "raid_level": "raid1", 00:11:35.782 "superblock": true, 00:11:35.782 "num_base_bdevs": 3, 00:11:35.782 "num_base_bdevs_discovered": 0, 00:11:35.782 "num_base_bdevs_operational": 3, 00:11:35.782 "base_bdevs_list": [ 00:11:35.782 { 00:11:35.782 "name": "BaseBdev1", 00:11:35.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.782 "is_configured": false, 00:11:35.782 "data_offset": 0, 00:11:35.782 "data_size": 0 00:11:35.782 }, 00:11:35.782 { 00:11:35.782 "name": "BaseBdev2", 00:11:35.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.782 
"is_configured": false, 00:11:35.782 "data_offset": 0, 00:11:35.782 "data_size": 0 00:11:35.782 }, 00:11:35.782 { 00:11:35.782 "name": "BaseBdev3", 00:11:35.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:35.782 "is_configured": false, 00:11:35.782 "data_offset": 0, 00:11:35.782 "data_size": 0 00:11:35.782 } 00:11:35.782 ] 00:11:35.782 }' 00:11:35.782 12:03:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:35.782 12:03:43 -- common/autotest_common.sh@10 -- # set +x 00:11:36.349 12:03:43 -- bdev/bdev_raid.sh@234 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:36.607 [2024-07-25 12:03:43.696896] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:36.607 [2024-07-25 12:03:43.696915] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x220cd60 name Existed_Raid, state configuring 00:11:36.607 12:03:43 -- bdev/bdev_raid.sh@238 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:36.607 [2024-07-25 12:03:43.869358] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:36.607 [2024-07-25 12:03:43.869378] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:36.607 [2024-07-25 12:03:43.869383] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:36.607 [2024-07-25 12:03:43.869391] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:36.607 [2024-07-25 12:03:43.869396] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:36.607 [2024-07-25 12:03:43.869404] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:36.607 12:03:43 -- bdev/bdev_raid.sh@239 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:36.865 [2024-07-25 12:03:44.042397] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.865 BaseBdev1 00:11:36.866 12:03:44 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:11:36.866 12:03:44 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:11:36.866 12:03:44 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:36.866 12:03:44 -- common/autotest_common.sh@889 -- # local i 00:11:36.866 12:03:44 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:36.866 12:03:44 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:36.866 12:03:44 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:37.124 12:03:44 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:37.124 [ 00:11:37.124 { 00:11:37.124 "name": "BaseBdev1", 00:11:37.124 "aliases": [ 00:11:37.124 "f634a8c6-6974-4b1f-b6a4-2bb414603d8a" 00:11:37.124 ], 00:11:37.124 "product_name": "Malloc disk", 00:11:37.124 "block_size": 512, 00:11:37.124 "num_blocks": 65536, 00:11:37.124 "uuid": "f634a8c6-6974-4b1f-b6a4-2bb414603d8a", 00:11:37.124 "assigned_rate_limits": { 00:11:37.124 "rw_ios_per_sec": 0, 00:11:37.124 "rw_mbytes_per_sec": 0, 00:11:37.124 "r_mbytes_per_sec": 0, 00:11:37.124 "w_mbytes_per_sec": 0 00:11:37.124 }, 00:11:37.124 "claimed": true, 00:11:37.124 "claim_type": "exclusive_write", 00:11:37.124 "zoned": false, 00:11:37.124 "supported_io_types": { 00:11:37.124 "read": true, 00:11:37.124 "write": true, 00:11:37.124 "unmap": true, 00:11:37.124 "write_zeroes": true, 00:11:37.124 "flush": true, 00:11:37.124 "reset": true, 00:11:37.124 "compare": false, 00:11:37.124 "compare_and_write": 
false, 00:11:37.124 "abort": true, 00:11:37.124 "nvme_admin": false, 00:11:37.124 "nvme_io": false 00:11:37.124 }, 00:11:37.124 "memory_domains": [ 00:11:37.124 { 00:11:37.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:37.124 "dma_device_type": 2 00:11:37.124 } 00:11:37.124 ], 00:11:37.124 "driver_specific": {} 00:11:37.124 } 00:11:37.124 ] 00:11:37.124 12:03:44 -- common/autotest_common.sh@895 -- # return 0 00:11:37.124 12:03:44 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:37.124 12:03:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:37.124 12:03:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:37.124 12:03:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:37.124 12:03:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:37.124 12:03:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:37.124 12:03:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:37.124 12:03:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:37.124 12:03:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:37.124 12:03:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:37.124 12:03:44 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:37.124 12:03:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.383 12:03:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:37.383 "name": "Existed_Raid", 00:11:37.383 "uuid": "afa0f1f7-beaa-4a6e-a4e6-e19894bb7d55", 00:11:37.383 "strip_size_kb": 0, 00:11:37.383 "state": "configuring", 00:11:37.383 "raid_level": "raid1", 00:11:37.383 "superblock": true, 00:11:37.383 "num_base_bdevs": 3, 00:11:37.383 "num_base_bdevs_discovered": 1, 00:11:37.383 "num_base_bdevs_operational": 3, 00:11:37.383 "base_bdevs_list": [ 00:11:37.383 { 00:11:37.383 "name": 
"BaseBdev1", 00:11:37.383 "uuid": "f634a8c6-6974-4b1f-b6a4-2bb414603d8a", 00:11:37.383 "is_configured": true, 00:11:37.383 "data_offset": 2048, 00:11:37.383 "data_size": 63488 00:11:37.383 }, 00:11:37.383 { 00:11:37.383 "name": "BaseBdev2", 00:11:37.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.383 "is_configured": false, 00:11:37.383 "data_offset": 0, 00:11:37.383 "data_size": 0 00:11:37.383 }, 00:11:37.383 { 00:11:37.383 "name": "BaseBdev3", 00:11:37.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.383 "is_configured": false, 00:11:37.383 "data_offset": 0, 00:11:37.383 "data_size": 0 00:11:37.383 } 00:11:37.383 ] 00:11:37.383 }' 00:11:37.383 12:03:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:37.383 12:03:44 -- common/autotest_common.sh@10 -- # set +x 00:11:37.949 12:03:45 -- bdev/bdev_raid.sh@242 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:37.949 [2024-07-25 12:03:45.193346] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:37.949 [2024-07-25 12:03:45.193375] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x220c630 name Existed_Raid, state configuring 00:11:37.949 12:03:45 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:11:37.949 12:03:45 -- bdev/bdev_raid.sh@246 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:38.207 12:03:45 -- bdev/bdev_raid.sh@247 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:38.466 BaseBdev1 00:11:38.466 12:03:45 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:11:38.466 12:03:45 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:11:38.466 12:03:45 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:38.466 12:03:45 -- common/autotest_common.sh@889 -- # local i 
00:11:38.466 12:03:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:38.466 12:03:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:38.466 12:03:45 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:38.466 12:03:45 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:38.725 [ 00:11:38.725 { 00:11:38.725 "name": "BaseBdev1", 00:11:38.725 "aliases": [ 00:11:38.725 "6fb0c4ce-d156-4c4e-83aa-aa75745f84d8" 00:11:38.725 ], 00:11:38.725 "product_name": "Malloc disk", 00:11:38.725 "block_size": 512, 00:11:38.725 "num_blocks": 65536, 00:11:38.725 "uuid": "6fb0c4ce-d156-4c4e-83aa-aa75745f84d8", 00:11:38.725 "assigned_rate_limits": { 00:11:38.725 "rw_ios_per_sec": 0, 00:11:38.725 "rw_mbytes_per_sec": 0, 00:11:38.725 "r_mbytes_per_sec": 0, 00:11:38.725 "w_mbytes_per_sec": 0 00:11:38.725 }, 00:11:38.725 "claimed": false, 00:11:38.725 "zoned": false, 00:11:38.725 "supported_io_types": { 00:11:38.725 "read": true, 00:11:38.725 "write": true, 00:11:38.725 "unmap": true, 00:11:38.725 "write_zeroes": true, 00:11:38.725 "flush": true, 00:11:38.725 "reset": true, 00:11:38.725 "compare": false, 00:11:38.725 "compare_and_write": false, 00:11:38.725 "abort": true, 00:11:38.725 "nvme_admin": false, 00:11:38.725 "nvme_io": false 00:11:38.725 }, 00:11:38.725 "memory_domains": [ 00:11:38.725 { 00:11:38.725 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.725 "dma_device_type": 2 00:11:38.725 } 00:11:38.725 ], 00:11:38.725 "driver_specific": {} 00:11:38.725 } 00:11:38.725 ] 00:11:38.725 12:03:45 -- common/autotest_common.sh@895 -- # return 0 00:11:38.725 12:03:45 -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n 
Existed_Raid 00:11:38.725 [2024-07-25 12:03:46.032128] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:38.725 [2024-07-25 12:03:46.033144] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:38.725 [2024-07-25 12:03:46.033169] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:38.725 [2024-07-25 12:03:46.033176] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:38.725 [2024-07-25 12:03:46.033184] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:38.984 12:03:46 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:11:38.984 12:03:46 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:38.984 12:03:46 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:38.984 12:03:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:38.984 12:03:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:38.984 12:03:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:38.984 12:03:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:38.984 12:03:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:38.984 12:03:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:38.984 12:03:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:38.984 12:03:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:38.984 12:03:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:38.984 12:03:46 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:38.984 12:03:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.984 12:03:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:38.984 "name": "Existed_Raid", 00:11:38.984 
"uuid": "cbac924a-c472-422c-9787-0993f632a22f", 00:11:38.984 "strip_size_kb": 0, 00:11:38.984 "state": "configuring", 00:11:38.984 "raid_level": "raid1", 00:11:38.984 "superblock": true, 00:11:38.984 "num_base_bdevs": 3, 00:11:38.984 "num_base_bdevs_discovered": 1, 00:11:38.984 "num_base_bdevs_operational": 3, 00:11:38.984 "base_bdevs_list": [ 00:11:38.984 { 00:11:38.984 "name": "BaseBdev1", 00:11:38.984 "uuid": "6fb0c4ce-d156-4c4e-83aa-aa75745f84d8", 00:11:38.984 "is_configured": true, 00:11:38.984 "data_offset": 2048, 00:11:38.984 "data_size": 63488 00:11:38.984 }, 00:11:38.984 { 00:11:38.984 "name": "BaseBdev2", 00:11:38.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.984 "is_configured": false, 00:11:38.984 "data_offset": 0, 00:11:38.984 "data_size": 0 00:11:38.984 }, 00:11:38.984 { 00:11:38.984 "name": "BaseBdev3", 00:11:38.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.984 "is_configured": false, 00:11:38.984 "data_offset": 0, 00:11:38.984 "data_size": 0 00:11:38.984 } 00:11:38.984 ] 00:11:38.984 }' 00:11:38.984 12:03:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:38.984 12:03:46 -- common/autotest_common.sh@10 -- # set +x 00:11:39.551 12:03:46 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:39.551 [2024-07-25 12:03:46.840926] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:39.551 BaseBdev2 00:11:39.551 12:03:46 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:11:39.551 12:03:46 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:11:39.551 12:03:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:39.551 12:03:46 -- common/autotest_common.sh@889 -- # local i 00:11:39.551 12:03:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:39.551 12:03:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:39.551 12:03:46 -- 
common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:39.809 12:03:47 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:40.067 [ 00:11:40.067 { 00:11:40.067 "name": "BaseBdev2", 00:11:40.067 "aliases": [ 00:11:40.067 "17236786-dbd9-4e71-84fb-8522c40eec74" 00:11:40.067 ], 00:11:40.067 "product_name": "Malloc disk", 00:11:40.067 "block_size": 512, 00:11:40.067 "num_blocks": 65536, 00:11:40.067 "uuid": "17236786-dbd9-4e71-84fb-8522c40eec74", 00:11:40.067 "assigned_rate_limits": { 00:11:40.067 "rw_ios_per_sec": 0, 00:11:40.067 "rw_mbytes_per_sec": 0, 00:11:40.067 "r_mbytes_per_sec": 0, 00:11:40.067 "w_mbytes_per_sec": 0 00:11:40.067 }, 00:11:40.067 "claimed": true, 00:11:40.067 "claim_type": "exclusive_write", 00:11:40.067 "zoned": false, 00:11:40.067 "supported_io_types": { 00:11:40.067 "read": true, 00:11:40.067 "write": true, 00:11:40.067 "unmap": true, 00:11:40.067 "write_zeroes": true, 00:11:40.067 "flush": true, 00:11:40.067 "reset": true, 00:11:40.067 "compare": false, 00:11:40.067 "compare_and_write": false, 00:11:40.067 "abort": true, 00:11:40.067 "nvme_admin": false, 00:11:40.067 "nvme_io": false 00:11:40.067 }, 00:11:40.067 "memory_domains": [ 00:11:40.067 { 00:11:40.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.067 "dma_device_type": 2 00:11:40.067 } 00:11:40.067 ], 00:11:40.067 "driver_specific": {} 00:11:40.067 } 00:11:40.067 ] 00:11:40.067 12:03:47 -- common/autotest_common.sh@895 -- # return 0 00:11:40.067 12:03:47 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:11:40.067 12:03:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:40.067 12:03:47 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:40.067 12:03:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 
00:11:40.067 12:03:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:40.067 12:03:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:40.067 12:03:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:40.067 12:03:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:40.067 12:03:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:40.067 12:03:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:40.067 12:03:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:40.067 12:03:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:40.067 12:03:47 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:40.067 12:03:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:40.326 12:03:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:40.326 "name": "Existed_Raid", 00:11:40.326 "uuid": "cbac924a-c472-422c-9787-0993f632a22f", 00:11:40.326 "strip_size_kb": 0, 00:11:40.326 "state": "configuring", 00:11:40.326 "raid_level": "raid1", 00:11:40.326 "superblock": true, 00:11:40.326 "num_base_bdevs": 3, 00:11:40.326 "num_base_bdevs_discovered": 2, 00:11:40.326 "num_base_bdevs_operational": 3, 00:11:40.326 "base_bdevs_list": [ 00:11:40.326 { 00:11:40.326 "name": "BaseBdev1", 00:11:40.326 "uuid": "6fb0c4ce-d156-4c4e-83aa-aa75745f84d8", 00:11:40.326 "is_configured": true, 00:11:40.326 "data_offset": 2048, 00:11:40.326 "data_size": 63488 00:11:40.326 }, 00:11:40.326 { 00:11:40.326 "name": "BaseBdev2", 00:11:40.326 "uuid": "17236786-dbd9-4e71-84fb-8522c40eec74", 00:11:40.326 "is_configured": true, 00:11:40.326 "data_offset": 2048, 00:11:40.326 "data_size": 63488 00:11:40.326 }, 00:11:40.326 { 00:11:40.326 "name": "BaseBdev3", 00:11:40.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.326 "is_configured": false, 00:11:40.326 "data_offset": 0, 00:11:40.326 "data_size": 
0 00:11:40.326 } 00:11:40.326 ] 00:11:40.326 }' 00:11:40.326 12:03:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:40.326 12:03:47 -- common/autotest_common.sh@10 -- # set +x 00:11:40.585 12:03:47 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:40.844 [2024-07-25 12:03:48.014836] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:40.844 [2024-07-25 12:03:48.014952] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x23ad180 00:11:40.844 [2024-07-25 12:03:48.014961] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:40.844 [2024-07-25 12:03:48.015075] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x23af370 00:11:40.844 [2024-07-25 12:03:48.015157] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x23ad180 00:11:40.844 [2024-07-25 12:03:48.015163] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x23ad180 00:11:40.844 [2024-07-25 12:03:48.015227] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:40.844 BaseBdev3 00:11:40.844 12:03:48 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:11:40.844 12:03:48 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:11:40.844 12:03:48 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:40.844 12:03:48 -- common/autotest_common.sh@889 -- # local i 00:11:40.844 12:03:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:40.844 12:03:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:40.844 12:03:48 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:41.102 12:03:48 -- common/autotest_common.sh@894 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:41.102 [ 00:11:41.102 { 00:11:41.102 "name": "BaseBdev3", 00:11:41.102 "aliases": [ 00:11:41.102 "099d640b-4aa5-4a69-81da-0185efcc185a" 00:11:41.102 ], 00:11:41.102 "product_name": "Malloc disk", 00:11:41.102 "block_size": 512, 00:11:41.102 "num_blocks": 65536, 00:11:41.102 "uuid": "099d640b-4aa5-4a69-81da-0185efcc185a", 00:11:41.102 "assigned_rate_limits": { 00:11:41.102 "rw_ios_per_sec": 0, 00:11:41.102 "rw_mbytes_per_sec": 0, 00:11:41.102 "r_mbytes_per_sec": 0, 00:11:41.102 "w_mbytes_per_sec": 0 00:11:41.102 }, 00:11:41.102 "claimed": true, 00:11:41.102 "claim_type": "exclusive_write", 00:11:41.102 "zoned": false, 00:11:41.102 "supported_io_types": { 00:11:41.102 "read": true, 00:11:41.102 "write": true, 00:11:41.102 "unmap": true, 00:11:41.102 "write_zeroes": true, 00:11:41.102 "flush": true, 00:11:41.102 "reset": true, 00:11:41.102 "compare": false, 00:11:41.102 "compare_and_write": false, 00:11:41.102 "abort": true, 00:11:41.102 "nvme_admin": false, 00:11:41.102 "nvme_io": false 00:11:41.102 }, 00:11:41.102 "memory_domains": [ 00:11:41.102 { 00:11:41.102 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.102 "dma_device_type": 2 00:11:41.102 } 00:11:41.102 ], 00:11:41.102 "driver_specific": {} 00:11:41.102 } 00:11:41.102 ] 00:11:41.102 12:03:48 -- common/autotest_common.sh@895 -- # return 0 00:11:41.102 12:03:48 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:11:41.102 12:03:48 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:11:41.102 12:03:48 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:41.102 12:03:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:41.102 12:03:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:41.102 12:03:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:41.102 12:03:48 -- bdev/bdev_raid.sh@120 -- # local 
strip_size=0 00:11:41.102 12:03:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:41.102 12:03:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:41.102 12:03:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:41.102 12:03:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:41.102 12:03:48 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:41.102 12:03:48 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:41.102 12:03:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.361 12:03:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:41.361 "name": "Existed_Raid", 00:11:41.361 "uuid": "cbac924a-c472-422c-9787-0993f632a22f", 00:11:41.361 "strip_size_kb": 0, 00:11:41.361 "state": "online", 00:11:41.361 "raid_level": "raid1", 00:11:41.361 "superblock": true, 00:11:41.361 "num_base_bdevs": 3, 00:11:41.361 "num_base_bdevs_discovered": 3, 00:11:41.361 "num_base_bdevs_operational": 3, 00:11:41.361 "base_bdevs_list": [ 00:11:41.361 { 00:11:41.361 "name": "BaseBdev1", 00:11:41.361 "uuid": "6fb0c4ce-d156-4c4e-83aa-aa75745f84d8", 00:11:41.361 "is_configured": true, 00:11:41.361 "data_offset": 2048, 00:11:41.361 "data_size": 63488 00:11:41.361 }, 00:11:41.361 { 00:11:41.361 "name": "BaseBdev2", 00:11:41.361 "uuid": "17236786-dbd9-4e71-84fb-8522c40eec74", 00:11:41.361 "is_configured": true, 00:11:41.361 "data_offset": 2048, 00:11:41.361 "data_size": 63488 00:11:41.361 }, 00:11:41.361 { 00:11:41.361 "name": "BaseBdev3", 00:11:41.361 "uuid": "099d640b-4aa5-4a69-81da-0185efcc185a", 00:11:41.361 "is_configured": true, 00:11:41.361 "data_offset": 2048, 00:11:41.361 "data_size": 63488 00:11:41.361 } 00:11:41.361 ] 00:11:41.361 }' 00:11:41.361 12:03:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:41.361 12:03:48 -- common/autotest_common.sh@10 -- # set +x 00:11:41.929 12:03:49 -- 
bdev/bdev_raid.sh@262 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:41.929 [2024-07-25 12:03:49.177866] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:41.929 12:03:49 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:11:41.929 12:03:49 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:11:41.929 12:03:49 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:11:41.929 12:03:49 -- bdev/bdev_raid.sh@196 -- # return 0 00:11:41.929 12:03:49 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:11:41.929 12:03:49 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:41.929 12:03:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:41.929 12:03:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:41.929 12:03:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:41.929 12:03:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:41.929 12:03:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:41.929 12:03:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:41.929 12:03:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:41.929 12:03:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:41.929 12:03:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:41.929 12:03:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.929 12:03:49 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:42.188 12:03:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:42.188 "name": "Existed_Raid", 00:11:42.188 "uuid": "cbac924a-c472-422c-9787-0993f632a22f", 00:11:42.188 "strip_size_kb": 0, 00:11:42.188 "state": "online", 00:11:42.188 "raid_level": "raid1", 00:11:42.188 "superblock": true, 00:11:42.188 "num_base_bdevs": 3, 
00:11:42.188 "num_base_bdevs_discovered": 2, 00:11:42.188 "num_base_bdevs_operational": 2, 00:11:42.188 "base_bdevs_list": [ 00:11:42.188 { 00:11:42.188 "name": null, 00:11:42.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.188 "is_configured": false, 00:11:42.188 "data_offset": 2048, 00:11:42.188 "data_size": 63488 00:11:42.188 }, 00:11:42.188 { 00:11:42.188 "name": "BaseBdev2", 00:11:42.188 "uuid": "17236786-dbd9-4e71-84fb-8522c40eec74", 00:11:42.188 "is_configured": true, 00:11:42.188 "data_offset": 2048, 00:11:42.188 "data_size": 63488 00:11:42.188 }, 00:11:42.188 { 00:11:42.188 "name": "BaseBdev3", 00:11:42.188 "uuid": "099d640b-4aa5-4a69-81da-0185efcc185a", 00:11:42.188 "is_configured": true, 00:11:42.188 "data_offset": 2048, 00:11:42.188 "data_size": 63488 00:11:42.188 } 00:11:42.188 ] 00:11:42.188 }' 00:11:42.188 12:03:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:42.188 12:03:49 -- common/autotest_common.sh@10 -- # set +x 00:11:42.755 12:03:49 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:11:42.755 12:03:49 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:42.755 12:03:49 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:11:42.755 12:03:49 -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:42.755 12:03:50 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:11:42.755 12:03:50 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:42.755 12:03:50 -- bdev/bdev_raid.sh@279 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:43.013 [2024-07-25 12:03:50.197322] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:43.013 12:03:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:11:43.013 12:03:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:43.013 12:03:50 -- bdev/bdev_raid.sh@274 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:43.013 12:03:50 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:11:43.272 12:03:50 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:11:43.272 12:03:50 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:43.272 12:03:50 -- bdev/bdev_raid.sh@279 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:43.272 [2024-07-25 12:03:50.539837] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:43.272 [2024-07-25 12:03:50.539859] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:43.272 [2024-07-25 12:03:50.539886] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:43.272 [2024-07-25 12:03:50.551785] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:43.272 [2024-07-25 12:03:50.551806] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x23ad180 name Existed_Raid, state offline 00:11:43.272 12:03:50 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:11:43.272 12:03:50 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:11:43.272 12:03:50 -- bdev/bdev_raid.sh@281 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:43.272 12:03:50 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:11:43.531 12:03:50 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:11:43.531 12:03:50 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:11:43.531 12:03:50 -- bdev/bdev_raid.sh@287 -- # killprocess 1236257 00:11:43.531 12:03:50 -- common/autotest_common.sh@926 -- # '[' -z 1236257 ']' 00:11:43.531 12:03:50 -- common/autotest_common.sh@930 -- # kill -0 1236257 00:11:43.531 12:03:50 -- common/autotest_common.sh@931 -- # uname 
00:11:43.531 12:03:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:43.531 12:03:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1236257 00:11:43.531 12:03:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:43.531 12:03:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:43.531 12:03:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1236257' 00:11:43.531 killing process with pid 1236257 00:11:43.531 12:03:50 -- common/autotest_common.sh@945 -- # kill 1236257 00:11:43.531 [2024-07-25 12:03:50.762715] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:43.531 12:03:50 -- common/autotest_common.sh@950 -- # wait 1236257 00:11:43.531 [2024-07-25 12:03:50.763622] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:43.790 12:03:50 -- bdev/bdev_raid.sh@289 -- # return 0 00:11:43.790 00:11:43.790 real 0m9.107s 00:11:43.790 user 0m15.941s 00:11:43.790 sys 0m1.810s 00:11:43.790 12:03:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:43.790 12:03:50 -- common/autotest_common.sh@10 -- # set +x 00:11:43.790 ************************************ 00:11:43.790 END TEST raid_state_function_test_sb 00:11:43.790 ************************************ 00:11:43.790 12:03:51 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:11:43.790 12:03:51 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:11:43.790 12:03:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:43.790 12:03:51 -- common/autotest_common.sh@10 -- # set +x 00:11:43.790 ************************************ 00:11:43.790 START TEST raid_superblock_test 00:11:43.790 ************************************ 00:11:43.790 12:03:51 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 3 00:11:43.790 12:03:51 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:11:43.790 12:03:51 -- bdev/bdev_raid.sh@339 -- # local 
num_base_bdevs=3 00:11:43.790 12:03:51 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:11:43.790 12:03:51 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:11:43.790 12:03:51 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:11:43.790 12:03:51 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:11:43.790 12:03:51 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:11:43.790 12:03:51 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:11:43.790 12:03:51 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:11:43.790 12:03:51 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:11:43.790 12:03:51 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:11:43.790 12:03:51 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:11:43.790 12:03:51 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:11:43.790 12:03:51 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:11:43.790 12:03:51 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:11:43.790 12:03:51 -- bdev/bdev_raid.sh@356 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:11:43.790 12:03:51 -- bdev/bdev_raid.sh@357 -- # raid_pid=1237641 00:11:43.790 12:03:51 -- bdev/bdev_raid.sh@358 -- # waitforlisten 1237641 /var/tmp/spdk-raid.sock 00:11:43.790 12:03:51 -- common/autotest_common.sh@819 -- # '[' -z 1237641 ']' 00:11:43.790 12:03:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:43.790 12:03:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:43.790 12:03:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:43.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:11:43.790 12:03:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:43.790 12:03:51 -- common/autotest_common.sh@10 -- # set +x 00:11:43.790 [2024-07-25 12:03:51.063332] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:11:43.790 [2024-07-25 12:03:51.063379] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1237641 ] 00:11:44.048 [2024-07-25 12:03:51.151887] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.048 [2024-07-25 12:03:51.238328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.048 [2024-07-25 12:03:51.294522] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.048 [2024-07-25 12:03:51.294552] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:44.615 12:03:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:44.615 12:03:51 -- common/autotest_common.sh@852 -- # return 0 00:11:44.615 12:03:51 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:11:44.615 12:03:51 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:11:44.615 12:03:51 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:11:44.615 12:03:51 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:11:44.615 12:03:51 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:44.615 12:03:51 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:44.615 12:03:51 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:11:44.615 12:03:51 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:44.615 12:03:51 -- bdev/bdev_raid.sh@370 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:11:44.873 malloc1 00:11:44.873 12:03:52 -- 
bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:45.131 [2024-07-25 12:03:52.197635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:45.131 [2024-07-25 12:03:52.197673] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.131 [2024-07-25 12:03:52.197693] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15d88d0 00:11:45.131 [2024-07-25 12:03:52.197702] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.131 [2024-07-25 12:03:52.198939] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.131 [2024-07-25 12:03:52.198961] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:45.131 pt1 00:11:45.131 12:03:52 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:11:45.131 12:03:52 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:11:45.131 12:03:52 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:11:45.131 12:03:52 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:11:45.131 12:03:52 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:45.131 12:03:52 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:45.131 12:03:52 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:11:45.131 12:03:52 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:45.131 12:03:52 -- bdev/bdev_raid.sh@370 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:11:45.131 malloc2 00:11:45.131 12:03:52 -- bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:11:45.389 [2024-07-25 12:03:52.518383] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:45.389 [2024-07-25 12:03:52.518420] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.389 [2024-07-25 12:03:52.518435] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17801a0 00:11:45.389 [2024-07-25 12:03:52.518443] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.389 [2024-07-25 12:03:52.519577] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.389 [2024-07-25 12:03:52.519598] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:45.389 pt2 00:11:45.389 12:03:52 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:11:45.389 12:03:52 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:11:45.389 12:03:52 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:11:45.389 12:03:52 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:11:45.389 12:03:52 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:45.389 12:03:52 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:45.389 12:03:52 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:11:45.389 12:03:52 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:45.389 12:03:52 -- bdev/bdev_raid.sh@370 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:11:45.389 malloc3 00:11:45.648 12:03:52 -- bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:45.648 [2024-07-25 12:03:52.847931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:45.648 [2024-07-25 12:03:52.847967] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:45.648 [2024-07-25 12:03:52.847993] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1780700 00:11:45.648 [2024-07-25 12:03:52.848002] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:45.648 [2024-07-25 12:03:52.849141] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:45.648 [2024-07-25 12:03:52.849163] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:45.648 pt3 00:11:45.648 12:03:52 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:11:45.648 12:03:52 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:11:45.648 12:03:52 -- bdev/bdev_raid.sh@375 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:11:45.906 [2024-07-25 12:03:53.016384] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:45.906 [2024-07-25 12:03:53.017338] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:45.906 [2024-07-25 12:03:53.017374] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:45.906 [2024-07-25 12:03:53.017488] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x1783cf0 00:11:45.906 [2024-07-25 12:03:53.017495] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:45.906 [2024-07-25 12:03:53.017629] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1780990 00:11:45.906 [2024-07-25 12:03:53.017721] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1783cf0 00:11:45.906 [2024-07-25 12:03:53.017727] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1783cf0 00:11:45.906 [2024-07-25 12:03:53.017793] bdev_raid.c: 
316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:45.906 12:03:53 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:45.906 12:03:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:45.906 12:03:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:45.906 12:03:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:45.906 12:03:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:45.906 12:03:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:45.906 12:03:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:45.906 12:03:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:45.906 12:03:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:45.906 12:03:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:45.906 12:03:53 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:45.906 12:03:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:45.906 12:03:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:45.906 "name": "raid_bdev1", 00:11:45.906 "uuid": "043c8563-2197-4fc5-a67c-5711d0198fc0", 00:11:45.906 "strip_size_kb": 0, 00:11:45.906 "state": "online", 00:11:45.906 "raid_level": "raid1", 00:11:45.906 "superblock": true, 00:11:45.906 "num_base_bdevs": 3, 00:11:45.906 "num_base_bdevs_discovered": 3, 00:11:45.906 "num_base_bdevs_operational": 3, 00:11:45.906 "base_bdevs_list": [ 00:11:45.906 { 00:11:45.906 "name": "pt1", 00:11:45.906 "uuid": "2f3b0e07-fe7b-5624-9d61-292c11dfcaf5", 00:11:45.906 "is_configured": true, 00:11:45.906 "data_offset": 2048, 00:11:45.906 "data_size": 63488 00:11:45.906 }, 00:11:45.906 { 00:11:45.906 "name": "pt2", 00:11:45.906 "uuid": "74f9b716-bddd-5898-b51c-e79c03015c83", 00:11:45.906 "is_configured": true, 00:11:45.907 "data_offset": 2048, 00:11:45.907 "data_size": 63488 
00:11:45.907 }, 00:11:45.907 { 00:11:45.907 "name": "pt3", 00:11:45.907 "uuid": "fea7d9ca-89c0-58c5-91cc-ccdc5cc939bb", 00:11:45.907 "is_configured": true, 00:11:45.907 "data_offset": 2048, 00:11:45.907 "data_size": 63488 00:11:45.907 } 00:11:45.907 ] 00:11:45.907 }' 00:11:45.907 12:03:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:45.907 12:03:53 -- common/autotest_common.sh@10 -- # set +x 00:11:46.474 12:03:53 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:11:46.474 12:03:53 -- bdev/bdev_raid.sh@379 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:46.733 [2024-07-25 12:03:53.822583] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:46.733 12:03:53 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=043c8563-2197-4fc5-a67c-5711d0198fc0 00:11:46.733 12:03:53 -- bdev/bdev_raid.sh@380 -- # '[' -z 043c8563-2197-4fc5-a67c-5711d0198fc0 ']' 00:11:46.733 12:03:53 -- bdev/bdev_raid.sh@385 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:46.733 [2024-07-25 12:03:53.990856] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:46.733 [2024-07-25 12:03:53.990872] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:46.733 [2024-07-25 12:03:53.990907] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:46.733 [2024-07-25 12:03:53.990953] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:46.733 [2024-07-25 12:03:53.990961] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1783cf0 name raid_bdev1, state offline 00:11:46.733 12:03:54 -- bdev/bdev_raid.sh@386 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:46.733 12:03:54 -- 
bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:11:46.991 12:03:54 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:11:46.992 12:03:54 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:11:46.992 12:03:54 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:11:46.992 12:03:54 -- bdev/bdev_raid.sh@393 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:11:47.250 12:03:54 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:11:47.250 12:03:54 -- bdev/bdev_raid.sh@393 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:47.250 12:03:54 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:11:47.250 12:03:54 -- bdev/bdev_raid.sh@393 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:11:47.509 12:03:54 -- bdev/bdev_raid.sh@395 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:11:47.509 12:03:54 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:47.768 12:03:54 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:11:47.768 12:03:54 -- bdev/bdev_raid.sh@401 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:11:47.768 12:03:54 -- common/autotest_common.sh@640 -- # local es=0 00:11:47.768 12:03:54 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:11:47.768 12:03:54 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:47.768 12:03:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 
00:11:47.768 12:03:54 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:47.768 12:03:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:47.768 12:03:54 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:47.768 12:03:54 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:47.768 12:03:54 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:11:47.768 12:03:54 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:11:47.768 12:03:54 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:11:47.768 [2024-07-25 12:03:54.993415] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:47.768 [2024-07-25 12:03:54.994417] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:47.768 [2024-07-25 12:03:54.994444] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:47.768 [2024-07-25 12:03:54.994477] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:11:47.768 [2024-07-25 12:03:54.994505] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:11:47.768 [2024-07-25 12:03:54.994519] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:11:47.768 [2024-07-25 12:03:54.994530] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:47.768 [2024-07-25 12:03:54.994537] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15d91c0 name 
raid_bdev1, state configuring 00:11:47.768 request: 00:11:47.768 { 00:11:47.768 "name": "raid_bdev1", 00:11:47.768 "raid_level": "raid1", 00:11:47.768 "base_bdevs": [ 00:11:47.768 "malloc1", 00:11:47.768 "malloc2", 00:11:47.768 "malloc3" 00:11:47.768 ], 00:11:47.768 "superblock": false, 00:11:47.768 "method": "bdev_raid_create", 00:11:47.768 "req_id": 1 00:11:47.768 } 00:11:47.768 Got JSON-RPC error response 00:11:47.768 response: 00:11:47.768 { 00:11:47.768 "code": -17, 00:11:47.768 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:47.768 } 00:11:47.768 12:03:55 -- common/autotest_common.sh@643 -- # es=1 00:11:47.768 12:03:55 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:47.768 12:03:55 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:47.768 12:03:55 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:47.768 12:03:55 -- bdev/bdev_raid.sh@403 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:47.768 12:03:55 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:11:48.027 12:03:55 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:11:48.027 12:03:55 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:11:48.027 12:03:55 -- bdev/bdev_raid.sh@409 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:48.027 [2024-07-25 12:03:55.322237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:48.027 [2024-07-25 12:03:55.322281] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.027 [2024-07-25 12:03:55.322298] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17817c0 00:11:48.027 [2024-07-25 12:03:55.322307] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.027 [2024-07-25 12:03:55.323490] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.027 [2024-07-25 12:03:55.323512] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:48.027 [2024-07-25 12:03:55.323564] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:11:48.027 [2024-07-25 12:03:55.323583] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:48.027 pt1 00:11:48.286 12:03:55 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:48.286 12:03:55 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:48.286 12:03:55 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:48.286 12:03:55 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:48.286 12:03:55 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:48.286 12:03:55 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:48.286 12:03:55 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:48.286 12:03:55 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:48.286 12:03:55 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:48.286 12:03:55 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:48.286 12:03:55 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:48.286 12:03:55 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.286 12:03:55 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:48.286 "name": "raid_bdev1", 00:11:48.286 "uuid": "043c8563-2197-4fc5-a67c-5711d0198fc0", 00:11:48.286 "strip_size_kb": 0, 00:11:48.286 "state": "configuring", 00:11:48.286 "raid_level": "raid1", 00:11:48.286 "superblock": true, 00:11:48.286 "num_base_bdevs": 3, 00:11:48.286 "num_base_bdevs_discovered": 1, 00:11:48.286 "num_base_bdevs_operational": 3, 00:11:48.286 "base_bdevs_list": [ 00:11:48.286 { 
00:11:48.286 "name": "pt1", 00:11:48.286 "uuid": "2f3b0e07-fe7b-5624-9d61-292c11dfcaf5", 00:11:48.286 "is_configured": true, 00:11:48.286 "data_offset": 2048, 00:11:48.286 "data_size": 63488 00:11:48.286 }, 00:11:48.286 { 00:11:48.286 "name": null, 00:11:48.286 "uuid": "74f9b716-bddd-5898-b51c-e79c03015c83", 00:11:48.286 "is_configured": false, 00:11:48.286 "data_offset": 2048, 00:11:48.286 "data_size": 63488 00:11:48.286 }, 00:11:48.286 { 00:11:48.286 "name": null, 00:11:48.286 "uuid": "fea7d9ca-89c0-58c5-91cc-ccdc5cc939bb", 00:11:48.286 "is_configured": false, 00:11:48.286 "data_offset": 2048, 00:11:48.286 "data_size": 63488 00:11:48.286 } 00:11:48.286 ] 00:11:48.286 }' 00:11:48.286 12:03:55 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:48.286 12:03:55 -- common/autotest_common.sh@10 -- # set +x 00:11:48.853 12:03:55 -- bdev/bdev_raid.sh@414 -- # '[' 3 -gt 2 ']' 00:11:48.853 12:03:55 -- bdev/bdev_raid.sh@416 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:48.853 [2024-07-25 12:03:56.096244] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:48.853 [2024-07-25 12:03:56.096284] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:48.853 [2024-07-25 12:03:56.096301] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1782bc0 00:11:48.853 [2024-07-25 12:03:56.096309] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:48.853 [2024-07-25 12:03:56.096546] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:48.853 [2024-07-25 12:03:56.096557] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:48.853 [2024-07-25 12:03:56.096600] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:11:48.853 [2024-07-25 12:03:56.096614] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:48.853 pt2 00:11:48.853 12:03:56 -- bdev/bdev_raid.sh@417 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:49.124 [2024-07-25 12:03:56.268699] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:49.124 12:03:56 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:49.124 12:03:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:49.124 12:03:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:49.124 12:03:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:49.124 12:03:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:49.124 12:03:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:49.124 12:03:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:49.124 12:03:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:49.124 12:03:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:49.124 12:03:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:49.124 12:03:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:49.124 12:03:56 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:49.405 12:03:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:49.405 "name": "raid_bdev1", 00:11:49.405 "uuid": "043c8563-2197-4fc5-a67c-5711d0198fc0", 00:11:49.405 "strip_size_kb": 0, 00:11:49.405 "state": "configuring", 00:11:49.405 "raid_level": "raid1", 00:11:49.405 "superblock": true, 00:11:49.405 "num_base_bdevs": 3, 00:11:49.405 "num_base_bdevs_discovered": 1, 00:11:49.405 "num_base_bdevs_operational": 3, 00:11:49.405 "base_bdevs_list": [ 00:11:49.405 { 00:11:49.405 "name": "pt1", 00:11:49.405 "uuid": "2f3b0e07-fe7b-5624-9d61-292c11dfcaf5", 
00:11:49.405 "is_configured": true, 00:11:49.405 "data_offset": 2048, 00:11:49.405 "data_size": 63488 00:11:49.405 }, 00:11:49.405 { 00:11:49.405 "name": null, 00:11:49.405 "uuid": "74f9b716-bddd-5898-b51c-e79c03015c83", 00:11:49.405 "is_configured": false, 00:11:49.405 "data_offset": 2048, 00:11:49.405 "data_size": 63488 00:11:49.405 }, 00:11:49.405 { 00:11:49.405 "name": null, 00:11:49.405 "uuid": "fea7d9ca-89c0-58c5-91cc-ccdc5cc939bb", 00:11:49.405 "is_configured": false, 00:11:49.405 "data_offset": 2048, 00:11:49.405 "data_size": 63488 00:11:49.405 } 00:11:49.405 ] 00:11:49.405 }' 00:11:49.405 12:03:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:49.405 12:03:56 -- common/autotest_common.sh@10 -- # set +x 00:11:49.663 12:03:56 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:11:49.663 12:03:56 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:11:49.663 12:03:56 -- bdev/bdev_raid.sh@423 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:49.921 [2024-07-25 12:03:57.114879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:49.921 [2024-07-25 12:03:57.114920] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:49.921 [2024-07-25 12:03:57.114939] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17803d0 00:11:49.921 [2024-07-25 12:03:57.114948] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:49.921 [2024-07-25 12:03:57.115185] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:49.921 [2024-07-25 12:03:57.115196] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:49.921 [2024-07-25 12:03:57.115238] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:11:49.921 [2024-07-25 12:03:57.115252] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:49.921 pt2 00:11:49.921 12:03:57 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:11:49.921 12:03:57 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:11:49.921 12:03:57 -- bdev/bdev_raid.sh@423 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:50.179 [2024-07-25 12:03:57.287322] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:50.179 [2024-07-25 12:03:57.287344] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:50.179 [2024-07-25 12:03:57.287357] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x17860a0 00:11:50.179 [2024-07-25 12:03:57.287366] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:50.179 [2024-07-25 12:03:57.287567] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:50.179 [2024-07-25 12:03:57.287579] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:50.179 [2024-07-25 12:03:57.287612] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:11:50.179 [2024-07-25 12:03:57.287624] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:50.179 [2024-07-25 12:03:57.287691] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x15d7170 00:11:50.179 [2024-07-25 12:03:57.287697] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:50.179 [2024-07-25 12:03:57.287804] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1780990 00:11:50.179 [2024-07-25 12:03:57.287885] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x15d7170 00:11:50.179 [2024-07-25 12:03:57.287891] bdev_raid.c:1615:raid_bdev_configure_cont: 
*DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x15d7170 00:11:50.179 [2024-07-25 12:03:57.287951] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:50.179 pt3 00:11:50.179 12:03:57 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:11:50.179 12:03:57 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:11:50.179 12:03:57 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:50.179 12:03:57 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:50.179 12:03:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:50.179 12:03:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:50.179 12:03:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:50.179 12:03:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:50.179 12:03:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:50.179 12:03:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:50.179 12:03:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:50.179 12:03:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:50.179 12:03:57 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:50.179 12:03:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:50.179 12:03:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:50.179 "name": "raid_bdev1", 00:11:50.179 "uuid": "043c8563-2197-4fc5-a67c-5711d0198fc0", 00:11:50.179 "strip_size_kb": 0, 00:11:50.179 "state": "online", 00:11:50.179 "raid_level": "raid1", 00:11:50.179 "superblock": true, 00:11:50.179 "num_base_bdevs": 3, 00:11:50.179 "num_base_bdevs_discovered": 3, 00:11:50.179 "num_base_bdevs_operational": 3, 00:11:50.179 "base_bdevs_list": [ 00:11:50.179 { 00:11:50.179 "name": "pt1", 00:11:50.179 "uuid": "2f3b0e07-fe7b-5624-9d61-292c11dfcaf5", 00:11:50.179 "is_configured": true, 
00:11:50.179 "data_offset": 2048, 00:11:50.179 "data_size": 63488 00:11:50.179 }, 00:11:50.179 { 00:11:50.179 "name": "pt2", 00:11:50.179 "uuid": "74f9b716-bddd-5898-b51c-e79c03015c83", 00:11:50.179 "is_configured": true, 00:11:50.179 "data_offset": 2048, 00:11:50.179 "data_size": 63488 00:11:50.179 }, 00:11:50.179 { 00:11:50.179 "name": "pt3", 00:11:50.179 "uuid": "fea7d9ca-89c0-58c5-91cc-ccdc5cc939bb", 00:11:50.179 "is_configured": true, 00:11:50.179 "data_offset": 2048, 00:11:50.179 "data_size": 63488 00:11:50.179 } 00:11:50.179 ] 00:11:50.179 }' 00:11:50.179 12:03:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:50.179 12:03:57 -- common/autotest_common.sh@10 -- # set +x 00:11:50.745 12:03:57 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:11:50.745 12:03:57 -- bdev/bdev_raid.sh@430 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:51.003 [2024-07-25 12:03:58.121621] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:51.003 12:03:58 -- bdev/bdev_raid.sh@430 -- # '[' 043c8563-2197-4fc5-a67c-5711d0198fc0 '!=' 043c8563-2197-4fc5-a67c-5711d0198fc0 ']' 00:11:51.003 12:03:58 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:11:51.003 12:03:58 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:11:51.003 12:03:58 -- bdev/bdev_raid.sh@196 -- # return 0 00:11:51.003 12:03:58 -- bdev/bdev_raid.sh@436 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:11:51.003 [2024-07-25 12:03:58.289919] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:51.003 12:03:58 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:51.003 12:03:58 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:51.003 12:03:58 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:51.003 12:03:58 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 
00:11:51.003 12:03:58 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:51.003 12:03:58 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:51.003 12:03:58 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:51.003 12:03:58 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:51.003 12:03:58 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:51.003 12:03:58 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:51.003 12:03:58 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:51.003 12:03:58 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.261 12:03:58 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:51.261 "name": "raid_bdev1", 00:11:51.261 "uuid": "043c8563-2197-4fc5-a67c-5711d0198fc0", 00:11:51.261 "strip_size_kb": 0, 00:11:51.261 "state": "online", 00:11:51.261 "raid_level": "raid1", 00:11:51.261 "superblock": true, 00:11:51.261 "num_base_bdevs": 3, 00:11:51.261 "num_base_bdevs_discovered": 2, 00:11:51.261 "num_base_bdevs_operational": 2, 00:11:51.261 "base_bdevs_list": [ 00:11:51.261 { 00:11:51.261 "name": null, 00:11:51.261 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.261 "is_configured": false, 00:11:51.261 "data_offset": 2048, 00:11:51.261 "data_size": 63488 00:11:51.261 }, 00:11:51.261 { 00:11:51.261 "name": "pt2", 00:11:51.261 "uuid": "74f9b716-bddd-5898-b51c-e79c03015c83", 00:11:51.261 "is_configured": true, 00:11:51.261 "data_offset": 2048, 00:11:51.261 "data_size": 63488 00:11:51.261 }, 00:11:51.261 { 00:11:51.261 "name": "pt3", 00:11:51.261 "uuid": "fea7d9ca-89c0-58c5-91cc-ccdc5cc939bb", 00:11:51.261 "is_configured": true, 00:11:51.261 "data_offset": 2048, 00:11:51.261 "data_size": 63488 00:11:51.261 } 00:11:51.261 ] 00:11:51.261 }' 00:11:51.261 12:03:58 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:51.261 12:03:58 -- common/autotest_common.sh@10 -- # set +x 
00:11:51.826 12:03:58 -- bdev/bdev_raid.sh@442 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:51.826 [2024-07-25 12:03:59.104012] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:51.826 [2024-07-25 12:03:59.104038] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:51.826 [2024-07-25 12:03:59.104072] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:51.826 [2024-07-25 12:03:59.104107] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:51.826 [2024-07-25 12:03:59.104115] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15d7170 name raid_bdev1, state offline 00:11:51.826 12:03:59 -- bdev/bdev_raid.sh@443 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:51.826 12:03:59 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:11:52.084 12:03:59 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:11:52.084 12:03:59 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:11:52.084 12:03:59 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:11:52.084 12:03:59 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:11:52.084 12:03:59 -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:52.343 12:03:59 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:11:52.343 12:03:59 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:11:52.343 12:03:59 -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:11:52.343 12:03:59 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:11:52.343 12:03:59 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:11:52.343 12:03:59 -- bdev/bdev_raid.sh@454 
-- # (( i = 1 )) 00:11:52.343 12:03:59 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:11:52.343 12:03:59 -- bdev/bdev_raid.sh@455 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:52.601 [2024-07-25 12:03:59.789756] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:52.601 [2024-07-25 12:03:59.789790] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:52.601 [2024-07-25 12:03:59.789806] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1782770 00:11:52.601 [2024-07-25 12:03:59.789814] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:52.601 [2024-07-25 12:03:59.790962] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:52.601 [2024-07-25 12:03:59.790982] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:52.601 [2024-07-25 12:03:59.791024] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:11:52.601 [2024-07-25 12:03:59.791042] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:52.601 pt2 00:11:52.601 12:03:59 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:52.601 12:03:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:52.601 12:03:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:52.601 12:03:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:52.601 12:03:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:52.601 12:03:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:52.601 12:03:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:52.601 12:03:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:52.601 12:03:59 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:52.601 12:03:59 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:52.601 12:03:59 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:52.601 12:03:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:52.859 12:03:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:52.859 "name": "raid_bdev1", 00:11:52.859 "uuid": "043c8563-2197-4fc5-a67c-5711d0198fc0", 00:11:52.859 "strip_size_kb": 0, 00:11:52.859 "state": "configuring", 00:11:52.859 "raid_level": "raid1", 00:11:52.859 "superblock": true, 00:11:52.859 "num_base_bdevs": 3, 00:11:52.859 "num_base_bdevs_discovered": 1, 00:11:52.859 "num_base_bdevs_operational": 2, 00:11:52.859 "base_bdevs_list": [ 00:11:52.859 { 00:11:52.859 "name": null, 00:11:52.859 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:52.859 "is_configured": false, 00:11:52.859 "data_offset": 2048, 00:11:52.859 "data_size": 63488 00:11:52.859 }, 00:11:52.859 { 00:11:52.859 "name": "pt2", 00:11:52.859 "uuid": "74f9b716-bddd-5898-b51c-e79c03015c83", 00:11:52.859 "is_configured": true, 00:11:52.859 "data_offset": 2048, 00:11:52.859 "data_size": 63488 00:11:52.859 }, 00:11:52.859 { 00:11:52.859 "name": null, 00:11:52.859 "uuid": "fea7d9ca-89c0-58c5-91cc-ccdc5cc939bb", 00:11:52.859 "is_configured": false, 00:11:52.859 "data_offset": 2048, 00:11:52.859 "data_size": 63488 00:11:52.859 } 00:11:52.859 ] 00:11:52.859 }' 00:11:52.859 12:03:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:52.859 12:03:59 -- common/autotest_common.sh@10 -- # set +x 00:11:53.423 12:04:00 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:11:53.423 12:04:00 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:11:53.423 12:04:00 -- bdev/bdev_raid.sh@462 -- # i=2 00:11:53.423 12:04:00 -- bdev/bdev_raid.sh@463 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:53.423 [2024-07-25 12:04:00.615903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:53.423 [2024-07-25 12:04:00.615940] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:53.423 [2024-07-25 12:04:00.615957] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x177f060 00:11:53.423 [2024-07-25 12:04:00.615966] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:53.423 [2024-07-25 12:04:00.616205] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:53.423 [2024-07-25 12:04:00.616218] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:53.423 [2024-07-25 12:04:00.616262] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:11:53.423 [2024-07-25 12:04:00.616285] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:53.423 [2024-07-25 12:04:00.616352] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x177f320 00:11:53.423 [2024-07-25 12:04:00.616359] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:53.423 [2024-07-25 12:04:00.616470] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1782b60 00:11:53.423 [2024-07-25 12:04:00.616554] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x177f320 00:11:53.423 [2024-07-25 12:04:00.616560] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x177f320 00:11:53.423 [2024-07-25 12:04:00.616626] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.423 pt3 00:11:53.423 12:04:00 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:53.423 12:04:00 -- bdev/bdev_raid.sh@117 -- 
# local raid_bdev_name=raid_bdev1 00:11:53.423 12:04:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:53.423 12:04:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:53.423 12:04:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:53.423 12:04:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:53.423 12:04:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:53.423 12:04:00 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:53.423 12:04:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:53.423 12:04:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:53.423 12:04:00 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:53.423 12:04:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:53.681 12:04:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:53.681 "name": "raid_bdev1", 00:11:53.681 "uuid": "043c8563-2197-4fc5-a67c-5711d0198fc0", 00:11:53.681 "strip_size_kb": 0, 00:11:53.681 "state": "online", 00:11:53.681 "raid_level": "raid1", 00:11:53.681 "superblock": true, 00:11:53.681 "num_base_bdevs": 3, 00:11:53.681 "num_base_bdevs_discovered": 2, 00:11:53.681 "num_base_bdevs_operational": 2, 00:11:53.681 "base_bdevs_list": [ 00:11:53.681 { 00:11:53.681 "name": null, 00:11:53.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.681 "is_configured": false, 00:11:53.681 "data_offset": 2048, 00:11:53.681 "data_size": 63488 00:11:53.681 }, 00:11:53.681 { 00:11:53.681 "name": "pt2", 00:11:53.681 "uuid": "74f9b716-bddd-5898-b51c-e79c03015c83", 00:11:53.681 "is_configured": true, 00:11:53.681 "data_offset": 2048, 00:11:53.681 "data_size": 63488 00:11:53.681 }, 00:11:53.681 { 00:11:53.681 "name": "pt3", 00:11:53.681 "uuid": "fea7d9ca-89c0-58c5-91cc-ccdc5cc939bb", 00:11:53.681 "is_configured": true, 00:11:53.681 "data_offset": 2048, 00:11:53.681 
"data_size": 63488 00:11:53.681 } 00:11:53.681 ] 00:11:53.681 }' 00:11:53.681 12:04:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:53.681 12:04:00 -- common/autotest_common.sh@10 -- # set +x 00:11:54.246 12:04:01 -- bdev/bdev_raid.sh@468 -- # '[' 3 -gt 2 ']' 00:11:54.246 12:04:01 -- bdev/bdev_raid.sh@470 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:54.246 [2024-07-25 12:04:01.446035] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:54.246 [2024-07-25 12:04:01.446053] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:54.246 [2024-07-25 12:04:01.446089] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:54.246 [2024-07-25 12:04:01.446127] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:54.246 [2024-07-25 12:04:01.446135] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x177f320 name raid_bdev1, state offline 00:11:54.246 12:04:01 -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:54.246 12:04:01 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:11:54.503 12:04:01 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:11:54.503 12:04:01 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:11:54.503 12:04:01 -- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:54.503 [2024-07-25 12:04:01.782898] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:54.503 [2024-07-25 12:04:01.782932] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:54.503 [2024-07-25 12:04:01.782949] vbdev_passthru.c: 
676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x15d79c0 00:11:54.503 [2024-07-25 12:04:01.782958] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:54.503 [2024-07-25 12:04:01.784140] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:54.503 [2024-07-25 12:04:01.784162] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:54.503 [2024-07-25 12:04:01.784209] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:11:54.503 [2024-07-25 12:04:01.784227] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:54.503 pt1 00:11:54.503 12:04:01 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:54.503 12:04:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:54.503 12:04:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:54.503 12:04:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:54.503 12:04:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:54.503 12:04:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:11:54.503 12:04:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:54.503 12:04:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:54.503 12:04:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:54.503 12:04:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:54.503 12:04:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:54.503 12:04:01 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:54.761 12:04:01 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:54.761 "name": "raid_bdev1", 00:11:54.761 "uuid": "043c8563-2197-4fc5-a67c-5711d0198fc0", 00:11:54.761 "strip_size_kb": 0, 00:11:54.761 "state": "configuring", 
00:11:54.761 "raid_level": "raid1", 00:11:54.761 "superblock": true, 00:11:54.761 "num_base_bdevs": 3, 00:11:54.761 "num_base_bdevs_discovered": 1, 00:11:54.761 "num_base_bdevs_operational": 3, 00:11:54.761 "base_bdevs_list": [ 00:11:54.761 { 00:11:54.761 "name": "pt1", 00:11:54.761 "uuid": "2f3b0e07-fe7b-5624-9d61-292c11dfcaf5", 00:11:54.761 "is_configured": true, 00:11:54.761 "data_offset": 2048, 00:11:54.761 "data_size": 63488 00:11:54.761 }, 00:11:54.761 { 00:11:54.761 "name": null, 00:11:54.761 "uuid": "74f9b716-bddd-5898-b51c-e79c03015c83", 00:11:54.761 "is_configured": false, 00:11:54.761 "data_offset": 2048, 00:11:54.761 "data_size": 63488 00:11:54.761 }, 00:11:54.761 { 00:11:54.761 "name": null, 00:11:54.761 "uuid": "fea7d9ca-89c0-58c5-91cc-ccdc5cc939bb", 00:11:54.761 "is_configured": false, 00:11:54.761 "data_offset": 2048, 00:11:54.761 "data_size": 63488 00:11:54.761 } 00:11:54.761 ] 00:11:54.761 }' 00:11:54.761 12:04:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:54.761 12:04:01 -- common/autotest_common.sh@10 -- # set +x 00:11:55.326 12:04:02 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:11:55.326 12:04:02 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:11:55.326 12:04:02 -- bdev/bdev_raid.sh@485 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:55.326 12:04:02 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:11:55.326 12:04:02 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:11:55.326 12:04:02 -- bdev/bdev_raid.sh@485 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:11:55.583 12:04:02 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:11:55.583 12:04:02 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:11:55.583 12:04:02 -- bdev/bdev_raid.sh@489 -- # i=2 00:11:55.583 12:04:02 -- bdev/bdev_raid.sh@490 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:55.841 [2024-07-25 12:04:02.929860] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:55.841 [2024-07-25 12:04:02.929893] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:55.841 [2024-07-25 12:04:02.929908] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x177f060 00:11:55.841 [2024-07-25 12:04:02.929916] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:55.841 [2024-07-25 12:04:02.930153] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:55.841 [2024-07-25 12:04:02.930165] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:55.841 [2024-07-25 12:04:02.930207] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:11:55.841 [2024-07-25 12:04:02.930215] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt3 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:55.841 [2024-07-25 12:04:02.930222] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:55.841 [2024-07-25 12:04:02.930231] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17743c0 name raid_bdev1, state configuring 00:11:55.841 [2024-07-25 12:04:02.930252] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:55.841 pt3 00:11:55.841 12:04:02 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:55.841 12:04:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:55.841 12:04:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:55.841 12:04:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:55.841 12:04:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:55.841 12:04:02 -- 
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:55.841 12:04:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:55.841 12:04:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:55.841 12:04:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:55.841 12:04:02 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:55.841 12:04:02 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:55.842 12:04:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:55.842 12:04:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:55.842 "name": "raid_bdev1", 00:11:55.842 "uuid": "043c8563-2197-4fc5-a67c-5711d0198fc0", 00:11:55.842 "strip_size_kb": 0, 00:11:55.842 "state": "configuring", 00:11:55.842 "raid_level": "raid1", 00:11:55.842 "superblock": true, 00:11:55.842 "num_base_bdevs": 3, 00:11:55.842 "num_base_bdevs_discovered": 1, 00:11:55.842 "num_base_bdevs_operational": 2, 00:11:55.842 "base_bdevs_list": [ 00:11:55.842 { 00:11:55.842 "name": null, 00:11:55.842 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.842 "is_configured": false, 00:11:55.842 "data_offset": 2048, 00:11:55.842 "data_size": 63488 00:11:55.842 }, 00:11:55.842 { 00:11:55.842 "name": null, 00:11:55.842 "uuid": "74f9b716-bddd-5898-b51c-e79c03015c83", 00:11:55.842 "is_configured": false, 00:11:55.842 "data_offset": 2048, 00:11:55.842 "data_size": 63488 00:11:55.842 }, 00:11:55.842 { 00:11:55.842 "name": "pt3", 00:11:55.842 "uuid": "fea7d9ca-89c0-58c5-91cc-ccdc5cc939bb", 00:11:55.842 "is_configured": true, 00:11:55.842 "data_offset": 2048, 00:11:55.842 "data_size": 63488 00:11:55.842 } 00:11:55.842 ] 00:11:55.842 }' 00:11:55.842 12:04:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:55.842 12:04:03 -- common/autotest_common.sh@10 -- # set +x 00:11:56.407 12:04:03 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:11:56.407 12:04:03 -- 
bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:11:56.407 12:04:03 -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:56.407 [2024-07-25 12:04:03.691849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:56.407 [2024-07-25 12:04:03.691887] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:56.407 [2024-07-25 12:04:03.691903] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1784500 00:11:56.407 [2024-07-25 12:04:03.691911] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:56.407 [2024-07-25 12:04:03.692144] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:56.407 [2024-07-25 12:04:03.692157] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:56.407 [2024-07-25 12:04:03.692202] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:11:56.407 [2024-07-25 12:04:03.692214] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:56.407 [2024-07-25 12:04:03.692285] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x17862c0 00:11:56.407 [2024-07-25 12:04:03.692292] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:56.407 [2024-07-25 12:04:03.692404] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x177f2f0 00:11:56.407 [2024-07-25 12:04:03.692486] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x17862c0 00:11:56.407 [2024-07-25 12:04:03.692492] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x17862c0 00:11:56.407 [2024-07-25 12:04:03.692555] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:11:56.407 pt2 00:11:56.407 12:04:03 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:11:56.407 12:04:03 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:11:56.407 12:04:03 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:56.407 12:04:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:11:56.407 12:04:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:11:56.407 12:04:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:11:56.407 12:04:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:11:56.407 12:04:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:11:56.407 12:04:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:56.407 12:04:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:56.407 12:04:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:56.407 12:04:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:56.407 12:04:03 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:56.407 12:04:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:56.665 12:04:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:56.665 "name": "raid_bdev1", 00:11:56.665 "uuid": "043c8563-2197-4fc5-a67c-5711d0198fc0", 00:11:56.665 "strip_size_kb": 0, 00:11:56.665 "state": "online", 00:11:56.665 "raid_level": "raid1", 00:11:56.665 "superblock": true, 00:11:56.665 "num_base_bdevs": 3, 00:11:56.665 "num_base_bdevs_discovered": 2, 00:11:56.665 "num_base_bdevs_operational": 2, 00:11:56.665 "base_bdevs_list": [ 00:11:56.665 { 00:11:56.665 "name": null, 00:11:56.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.665 "is_configured": false, 00:11:56.665 "data_offset": 2048, 00:11:56.665 "data_size": 63488 00:11:56.665 }, 00:11:56.666 { 00:11:56.666 "name": "pt2", 00:11:56.666 "uuid": 
"74f9b716-bddd-5898-b51c-e79c03015c83", 00:11:56.666 "is_configured": true, 00:11:56.666 "data_offset": 2048, 00:11:56.666 "data_size": 63488 00:11:56.666 }, 00:11:56.666 { 00:11:56.666 "name": "pt3", 00:11:56.666 "uuid": "fea7d9ca-89c0-58c5-91cc-ccdc5cc939bb", 00:11:56.666 "is_configured": true, 00:11:56.666 "data_offset": 2048, 00:11:56.666 "data_size": 63488 00:11:56.666 } 00:11:56.666 ] 00:11:56.666 }' 00:11:56.666 12:04:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:56.666 12:04:03 -- common/autotest_common.sh@10 -- # set +x 00:11:57.232 12:04:04 -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:57.232 12:04:04 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:11:57.232 [2024-07-25 12:04:04.514091] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:57.232 12:04:04 -- bdev/bdev_raid.sh@506 -- # '[' 043c8563-2197-4fc5-a67c-5711d0198fc0 '!=' 043c8563-2197-4fc5-a67c-5711d0198fc0 ']' 00:11:57.232 12:04:04 -- bdev/bdev_raid.sh@511 -- # killprocess 1237641 00:11:57.232 12:04:04 -- common/autotest_common.sh@926 -- # '[' -z 1237641 ']' 00:11:57.232 12:04:04 -- common/autotest_common.sh@930 -- # kill -0 1237641 00:11:57.232 12:04:04 -- common/autotest_common.sh@931 -- # uname 00:11:57.491 12:04:04 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:57.491 12:04:04 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1237641 00:11:57.491 12:04:04 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:57.491 12:04:04 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:57.491 12:04:04 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1237641' 00:11:57.491 killing process with pid 1237641 00:11:57.491 12:04:04 -- common/autotest_common.sh@945 -- # kill 1237641 00:11:57.491 [2024-07-25 12:04:04.582633] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: 
raid_bdev_fini_start 00:11:57.491 [2024-07-25 12:04:04.582677] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.491 [2024-07-25 12:04:04.582717] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:57.491 [2024-07-25 12:04:04.582726] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x17862c0 name raid_bdev1, state offline 00:11:57.491 12:04:04 -- common/autotest_common.sh@950 -- # wait 1237641 00:11:57.491 [2024-07-25 12:04:04.609873] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:57.749 12:04:04 -- bdev/bdev_raid.sh@513 -- # return 0 00:11:57.749 00:11:57.749 real 0m13.810s 00:11:57.749 user 0m24.801s 00:11:57.749 sys 0m2.699s 00:11:57.749 12:04:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:57.749 12:04:04 -- common/autotest_common.sh@10 -- # set +x 00:11:57.749 ************************************ 00:11:57.749 END TEST raid_superblock_test 00:11:57.749 ************************************ 00:11:57.749 12:04:04 -- bdev/bdev_raid.sh@725 -- # for n in {2..4} 00:11:57.749 12:04:04 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:11:57.749 12:04:04 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:11:57.749 12:04:04 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:11:57.749 12:04:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:57.749 12:04:04 -- common/autotest_common.sh@10 -- # set +x 00:11:57.749 ************************************ 00:11:57.749 START TEST raid_state_function_test 00:11:57.749 ************************************ 00:11:57.749 12:04:04 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 false 00:11:57.749 12:04:04 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:11:57.749 12:04:04 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:11:57.750 12:04:04 -- bdev/bdev_raid.sh@204 -- # 
local superblock=false 00:11:57.750 12:04:04 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:11:57.750 12:04:04 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:11:57.750 12:04:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:57.750 12:04:04 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:11:57.750 12:04:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:57.750 12:04:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:57.750 12:04:04 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:11:57.750 12:04:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:57.750 12:04:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:57.750 12:04:04 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:11:57.750 12:04:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:57.750 12:04:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:57.750 12:04:04 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:11:57.750 12:04:04 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:11:57.750 12:04:04 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:11:57.750 12:04:04 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:57.750 12:04:04 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:11:57.750 12:04:04 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:11:57.750 12:04:04 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:11:57.750 12:04:04 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:11:57.750 12:04:04 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:11:57.750 12:04:04 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:11:57.750 12:04:04 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:11:57.750 12:04:04 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:11:57.750 12:04:04 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:11:57.750 12:04:04 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:11:57.750 12:04:04 -- bdev/bdev_raid.sh@226 -- # raid_pid=1239970 
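The `(( i = 1 )) … (( i <= num_base_bdevs ))` trace above comes from the loop in bdev_raid.sh@206 that builds the list of base bdev names passed to `bdev_raid_create -b`. A minimal, self-contained sketch of that pattern (the `BaseBdevN` naming and the count of 4 are taken from the log; everything else is illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the base-bdev list construction traced at bdev_raid.sh@206:
# iterate i from 1..num_base_bdevs and collect "BaseBdevN" names.
num_base_bdevs=4

base_bdevs=()
for (( i = 1; i <= num_base_bdevs; i++ )); do
    base_bdevs+=("BaseBdev$i")
done

# The space-joined list is what the harness hands to:
#   rpc.py ... bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 ... BaseBdev4' -n Existed_Raid
echo "${base_bdevs[*]}"
```

This matches the `echo BaseBdev1` … `echo BaseBdev4` records in the trace, which are the loop body emitting each name in turn.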
00:11:57.750 12:04:04 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 1239970' 00:11:57.750 Process raid pid: 1239970 00:11:57.750 12:04:04 -- bdev/bdev_raid.sh@225 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:57.750 12:04:04 -- bdev/bdev_raid.sh@228 -- # waitforlisten 1239970 /var/tmp/spdk-raid.sock 00:11:57.750 12:04:04 -- common/autotest_common.sh@819 -- # '[' -z 1239970 ']' 00:11:57.750 12:04:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:57.750 12:04:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:57.750 12:04:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:57.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:57.750 12:04:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:57.750 12:04:04 -- common/autotest_common.sh@10 -- # set +x 00:11:57.750 [2024-07-25 12:04:04.938267] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:11:57.750 [2024-07-25 12:04:04.938339] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:57.750 [2024-07-25 12:04:05.027319] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.008 [2024-07-25 12:04:05.115603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.008 [2024-07-25 12:04:05.165047] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:58.008 [2024-07-25 12:04:05.165071] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:58.575 12:04:05 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:58.575 12:04:05 -- common/autotest_common.sh@852 -- # return 0 00:11:58.575 12:04:05 -- bdev/bdev_raid.sh@232 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:11:58.575 [2024-07-25 12:04:05.869419] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:58.575 [2024-07-25 12:04:05.869452] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:58.575 [2024-07-25 12:04:05.869459] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:58.575 [2024-07-25 12:04:05.869467] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:58.575 [2024-07-25 12:04:05.869473] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:58.575 [2024-07-25 12:04:05.869480] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:58.575 [2024-07-25 12:04:05.869485] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:11:58.575 [2024-07-25 12:04:05.869493] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:58.575 12:04:05 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:58.575 12:04:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:11:58.575 12:04:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:11:58.575 12:04:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:11:58.833 12:04:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:11:58.833 12:04:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:11:58.833 12:04:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:11:58.833 12:04:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:11:58.833 12:04:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:11:58.833 12:04:05 -- bdev/bdev_raid.sh@125 -- # local tmp 00:11:58.833 12:04:05 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:58.833 12:04:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.833 12:04:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:11:58.833 "name": "Existed_Raid", 00:11:58.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.833 "strip_size_kb": 64, 00:11:58.833 "state": "configuring", 00:11:58.833 "raid_level": "raid0", 00:11:58.833 "superblock": false, 00:11:58.833 "num_base_bdevs": 4, 00:11:58.833 "num_base_bdevs_discovered": 0, 00:11:58.833 "num_base_bdevs_operational": 4, 00:11:58.833 "base_bdevs_list": [ 00:11:58.833 { 00:11:58.833 "name": "BaseBdev1", 00:11:58.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.833 "is_configured": false, 00:11:58.833 "data_offset": 0, 00:11:58.833 "data_size": 0 00:11:58.833 }, 00:11:58.833 { 00:11:58.833 "name": "BaseBdev2", 00:11:58.833 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:58.833 "is_configured": false, 00:11:58.833 "data_offset": 0, 00:11:58.833 "data_size": 0 00:11:58.833 }, 00:11:58.833 { 00:11:58.833 "name": "BaseBdev3", 00:11:58.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.833 "is_configured": false, 00:11:58.833 "data_offset": 0, 00:11:58.833 "data_size": 0 00:11:58.833 }, 00:11:58.833 { 00:11:58.833 "name": "BaseBdev4", 00:11:58.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.833 "is_configured": false, 00:11:58.833 "data_offset": 0, 00:11:58.833 "data_size": 0 00:11:58.833 } 00:11:58.833 ] 00:11:58.833 }' 00:11:58.833 12:04:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:11:58.833 12:04:06 -- common/autotest_common.sh@10 -- # set +x 00:11:59.398 12:04:06 -- bdev/bdev_raid.sh@234 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:59.399 [2024-07-25 12:04:06.691448] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:59.399 [2024-07-25 12:04:06.691468] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb06d80 name Existed_Raid, state configuring 00:11:59.657 12:04:06 -- bdev/bdev_raid.sh@238 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:11:59.657 [2024-07-25 12:04:06.851877] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:59.657 [2024-07-25 12:04:06.851897] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:59.657 [2024-07-25 12:04:06.851903] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:59.657 [2024-07-25 12:04:06.851911] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:59.657 [2024-07-25 
12:04:06.851916] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:59.657 [2024-07-25 12:04:06.851923] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:59.657 [2024-07-25 12:04:06.851928] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:59.657 [2024-07-25 12:04:06.851935] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:59.657 12:04:06 -- bdev/bdev_raid.sh@239 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:59.915 [2024-07-25 12:04:07.028787] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:59.915 BaseBdev1 00:11:59.915 12:04:07 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:11:59.915 12:04:07 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:11:59.915 12:04:07 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:11:59.915 12:04:07 -- common/autotest_common.sh@889 -- # local i 00:11:59.915 12:04:07 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:11:59.915 12:04:07 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:11:59.915 12:04:07 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:59.915 12:04:07 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:00.174 [ 00:12:00.174 { 00:12:00.174 "name": "BaseBdev1", 00:12:00.174 "aliases": [ 00:12:00.174 "f961a48d-db6e-4422-a81a-a9dd7dda4c78" 00:12:00.174 ], 00:12:00.174 "product_name": "Malloc disk", 00:12:00.174 "block_size": 512, 00:12:00.174 "num_blocks": 65536, 00:12:00.174 "uuid": "f961a48d-db6e-4422-a81a-a9dd7dda4c78", 00:12:00.174 
"assigned_rate_limits": { 00:12:00.174 "rw_ios_per_sec": 0, 00:12:00.174 "rw_mbytes_per_sec": 0, 00:12:00.174 "r_mbytes_per_sec": 0, 00:12:00.174 "w_mbytes_per_sec": 0 00:12:00.174 }, 00:12:00.174 "claimed": true, 00:12:00.174 "claim_type": "exclusive_write", 00:12:00.174 "zoned": false, 00:12:00.174 "supported_io_types": { 00:12:00.174 "read": true, 00:12:00.174 "write": true, 00:12:00.174 "unmap": true, 00:12:00.174 "write_zeroes": true, 00:12:00.174 "flush": true, 00:12:00.174 "reset": true, 00:12:00.174 "compare": false, 00:12:00.174 "compare_and_write": false, 00:12:00.174 "abort": true, 00:12:00.174 "nvme_admin": false, 00:12:00.174 "nvme_io": false 00:12:00.174 }, 00:12:00.174 "memory_domains": [ 00:12:00.174 { 00:12:00.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.174 "dma_device_type": 2 00:12:00.174 } 00:12:00.174 ], 00:12:00.174 "driver_specific": {} 00:12:00.174 } 00:12:00.174 ] 00:12:00.174 12:04:07 -- common/autotest_common.sh@895 -- # return 0 00:12:00.174 12:04:07 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:00.174 12:04:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:00.174 12:04:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:00.174 12:04:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:00.174 12:04:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:00.174 12:04:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:12:00.174 12:04:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:00.174 12:04:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:00.174 12:04:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:00.174 12:04:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:00.174 12:04:07 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:00.174 12:04:07 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.432 12:04:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:00.432 "name": "Existed_Raid", 00:12:00.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.432 "strip_size_kb": 64, 00:12:00.432 "state": "configuring", 00:12:00.432 "raid_level": "raid0", 00:12:00.432 "superblock": false, 00:12:00.432 "num_base_bdevs": 4, 00:12:00.432 "num_base_bdevs_discovered": 1, 00:12:00.432 "num_base_bdevs_operational": 4, 00:12:00.432 "base_bdevs_list": [ 00:12:00.432 { 00:12:00.432 "name": "BaseBdev1", 00:12:00.432 "uuid": "f961a48d-db6e-4422-a81a-a9dd7dda4c78", 00:12:00.432 "is_configured": true, 00:12:00.432 "data_offset": 0, 00:12:00.432 "data_size": 65536 00:12:00.432 }, 00:12:00.432 { 00:12:00.432 "name": "BaseBdev2", 00:12:00.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.432 "is_configured": false, 00:12:00.432 "data_offset": 0, 00:12:00.432 "data_size": 0 00:12:00.432 }, 00:12:00.432 { 00:12:00.432 "name": "BaseBdev3", 00:12:00.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.432 "is_configured": false, 00:12:00.432 "data_offset": 0, 00:12:00.432 "data_size": 0 00:12:00.432 }, 00:12:00.432 { 00:12:00.432 "name": "BaseBdev4", 00:12:00.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.432 "is_configured": false, 00:12:00.432 "data_offset": 0, 00:12:00.432 "data_size": 0 00:12:00.432 } 00:12:00.432 ] 00:12:00.432 }' 00:12:00.432 12:04:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:00.432 12:04:07 -- common/autotest_common.sh@10 -- # set +x 00:12:00.998 12:04:08 -- bdev/bdev_raid.sh@242 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:00.998 [2024-07-25 12:04:08.195809] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:00.998 [2024-07-25 12:04:08.195846] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb07000 
name Existed_Raid, state configuring 00:12:00.998 12:04:08 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:12:00.998 12:04:08 -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:01.256 [2024-07-25 12:04:08.368277] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:01.256 [2024-07-25 12:04:08.369304] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:01.256 [2024-07-25 12:04:08.369327] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:01.256 [2024-07-25 12:04:08.369333] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:01.256 [2024-07-25 12:04:08.369340] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:01.256 [2024-07-25 12:04:08.369364] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:01.256 [2024-07-25 12:04:08.369372] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:01.256 12:04:08 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:12:01.256 12:04:08 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:01.256 12:04:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:01.256 12:04:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:01.256 12:04:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:01.256 12:04:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:01.256 12:04:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:01.256 12:04:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:12:01.256 12:04:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:01.256 
12:04:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:01.256 12:04:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:01.256 12:04:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:01.256 12:04:08 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:01.257 12:04:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.515 12:04:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:01.515 "name": "Existed_Raid", 00:12:01.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.515 "strip_size_kb": 64, 00:12:01.515 "state": "configuring", 00:12:01.515 "raid_level": "raid0", 00:12:01.515 "superblock": false, 00:12:01.515 "num_base_bdevs": 4, 00:12:01.515 "num_base_bdevs_discovered": 1, 00:12:01.515 "num_base_bdevs_operational": 4, 00:12:01.515 "base_bdevs_list": [ 00:12:01.515 { 00:12:01.515 "name": "BaseBdev1", 00:12:01.515 "uuid": "f961a48d-db6e-4422-a81a-a9dd7dda4c78", 00:12:01.515 "is_configured": true, 00:12:01.515 "data_offset": 0, 00:12:01.515 "data_size": 65536 00:12:01.515 }, 00:12:01.515 { 00:12:01.515 "name": "BaseBdev2", 00:12:01.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.515 "is_configured": false, 00:12:01.515 "data_offset": 0, 00:12:01.515 "data_size": 0 00:12:01.515 }, 00:12:01.515 { 00:12:01.515 "name": "BaseBdev3", 00:12:01.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.515 "is_configured": false, 00:12:01.515 "data_offset": 0, 00:12:01.515 "data_size": 0 00:12:01.515 }, 00:12:01.515 { 00:12:01.515 "name": "BaseBdev4", 00:12:01.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.515 "is_configured": false, 00:12:01.515 "data_offset": 0, 00:12:01.515 "data_size": 0 00:12:01.515 } 00:12:01.515 ] 00:12:01.515 }' 00:12:01.515 12:04:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:01.515 12:04:08 -- common/autotest_common.sh@10 -- # set +x 
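The `verify_raid_bdev_state` calls traced above (bdev_raid.sh@117-129) fetch the named raid bdev with `bdev_raid_get_bdevs all`, select it via `jq -r '.[] | select(.name == …)'`, and compare fields against the expected state. The sketch below reproduces that comparison in pure bash, using `sed` instead of the real harness's `jq` pipeline so the example is self-contained; the JSON field names and values are copied from the `raid_bdev_info` dump in the log:

```shell
#!/usr/bin/env bash
# Minimal sketch of the verify_raid_bdev_state check: pull individual
# fields out of a raid bdev's JSON dump and compare them to expectations.
# (The real test uses jq; sed is substituted here for self-containment.)
raid_bdev_info='{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid0",
  "strip_size_kb": 64,
  "num_base_bdevs_operational": 4
}'

# get_field <key> -> prints the (string or numeric) value of "key"
get_field() {
    echo "$raid_bdev_info" \
        | sed -n "s/.*\"$1\": \"\{0,1\}\([^\",]*\)\"\{0,1\},\{0,1\}.*/\1/p" \
        | head -n1
}

state=$(get_field state)
raid_level=$(get_field raid_level)
strip_size=$(get_field strip_size_kb)
```

In the harness, a mismatch on any of these fields fails the test; the trace above shows the happy path, where `state` is `configuring` while only `BaseBdev1` of the four base bdevs has been created.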
00:12:01.773 12:04:09 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:02.031 [2024-07-25 12:04:09.193346] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:02.031 BaseBdev2 00:12:02.031 12:04:09 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:12:02.031 12:04:09 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:12:02.031 12:04:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:02.031 12:04:09 -- common/autotest_common.sh@889 -- # local i 00:12:02.031 12:04:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:02.031 12:04:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:02.031 12:04:09 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:02.290 12:04:09 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:02.290 [ 00:12:02.290 { 00:12:02.290 "name": "BaseBdev2", 00:12:02.290 "aliases": [ 00:12:02.290 "7fa9c365-b9d5-4431-86d0-689554c2ec4b" 00:12:02.290 ], 00:12:02.290 "product_name": "Malloc disk", 00:12:02.290 "block_size": 512, 00:12:02.290 "num_blocks": 65536, 00:12:02.290 "uuid": "7fa9c365-b9d5-4431-86d0-689554c2ec4b", 00:12:02.290 "assigned_rate_limits": { 00:12:02.290 "rw_ios_per_sec": 0, 00:12:02.290 "rw_mbytes_per_sec": 0, 00:12:02.290 "r_mbytes_per_sec": 0, 00:12:02.290 "w_mbytes_per_sec": 0 00:12:02.290 }, 00:12:02.290 "claimed": true, 00:12:02.290 "claim_type": "exclusive_write", 00:12:02.290 "zoned": false, 00:12:02.290 "supported_io_types": { 00:12:02.290 "read": true, 00:12:02.290 "write": true, 00:12:02.290 "unmap": true, 00:12:02.290 "write_zeroes": true, 00:12:02.290 "flush": true, 00:12:02.290 "reset": true, 00:12:02.290 
"compare": false, 00:12:02.290 "compare_and_write": false, 00:12:02.290 "abort": true, 00:12:02.290 "nvme_admin": false, 00:12:02.290 "nvme_io": false 00:12:02.290 }, 00:12:02.290 "memory_domains": [ 00:12:02.290 { 00:12:02.290 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.290 "dma_device_type": 2 00:12:02.290 } 00:12:02.290 ], 00:12:02.290 "driver_specific": {} 00:12:02.290 } 00:12:02.290 ] 00:12:02.290 12:04:09 -- common/autotest_common.sh@895 -- # return 0 00:12:02.290 12:04:09 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:12:02.290 12:04:09 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:02.290 12:04:09 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:02.290 12:04:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:02.290 12:04:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:02.290 12:04:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:02.290 12:04:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:02.290 12:04:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:12:02.290 12:04:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:02.290 12:04:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:02.291 12:04:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:02.291 12:04:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:02.291 12:04:09 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:02.291 12:04:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.549 12:04:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:02.549 "name": "Existed_Raid", 00:12:02.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.549 "strip_size_kb": 64, 00:12:02.549 "state": "configuring", 00:12:02.549 "raid_level": "raid0", 00:12:02.549 "superblock": false, 
00:12:02.549 "num_base_bdevs": 4, 00:12:02.549 "num_base_bdevs_discovered": 2, 00:12:02.549 "num_base_bdevs_operational": 4, 00:12:02.549 "base_bdevs_list": [ 00:12:02.549 { 00:12:02.549 "name": "BaseBdev1", 00:12:02.549 "uuid": "f961a48d-db6e-4422-a81a-a9dd7dda4c78", 00:12:02.549 "is_configured": true, 00:12:02.549 "data_offset": 0, 00:12:02.549 "data_size": 65536 00:12:02.549 }, 00:12:02.549 { 00:12:02.549 "name": "BaseBdev2", 00:12:02.549 "uuid": "7fa9c365-b9d5-4431-86d0-689554c2ec4b", 00:12:02.549 "is_configured": true, 00:12:02.549 "data_offset": 0, 00:12:02.549 "data_size": 65536 00:12:02.549 }, 00:12:02.549 { 00:12:02.549 "name": "BaseBdev3", 00:12:02.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.549 "is_configured": false, 00:12:02.549 "data_offset": 0, 00:12:02.549 "data_size": 0 00:12:02.549 }, 00:12:02.549 { 00:12:02.549 "name": "BaseBdev4", 00:12:02.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.549 "is_configured": false, 00:12:02.549 "data_offset": 0, 00:12:02.549 "data_size": 0 00:12:02.549 } 00:12:02.549 ] 00:12:02.549 }' 00:12:02.549 12:04:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:02.549 12:04:09 -- common/autotest_common.sh@10 -- # set +x 00:12:03.117 12:04:10 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:03.117 [2024-07-25 12:04:10.364294] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:03.117 BaseBdev3 00:12:03.117 12:04:10 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:12:03.117 12:04:10 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:12:03.117 12:04:10 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:03.117 12:04:10 -- common/autotest_common.sh@889 -- # local i 00:12:03.117 12:04:10 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:03.117 12:04:10 -- common/autotest_common.sh@890 -- # 
bdev_timeout=2000 00:12:03.117 12:04:10 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:03.379 12:04:10 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:03.379 [ 00:12:03.379 { 00:12:03.379 "name": "BaseBdev3", 00:12:03.379 "aliases": [ 00:12:03.379 "aec8596a-8cfd-4015-869a-b01e4ce97f8d" 00:12:03.379 ], 00:12:03.379 "product_name": "Malloc disk", 00:12:03.379 "block_size": 512, 00:12:03.379 "num_blocks": 65536, 00:12:03.379 "uuid": "aec8596a-8cfd-4015-869a-b01e4ce97f8d", 00:12:03.379 "assigned_rate_limits": { 00:12:03.379 "rw_ios_per_sec": 0, 00:12:03.379 "rw_mbytes_per_sec": 0, 00:12:03.379 "r_mbytes_per_sec": 0, 00:12:03.379 "w_mbytes_per_sec": 0 00:12:03.379 }, 00:12:03.379 "claimed": true, 00:12:03.379 "claim_type": "exclusive_write", 00:12:03.379 "zoned": false, 00:12:03.379 "supported_io_types": { 00:12:03.379 "read": true, 00:12:03.379 "write": true, 00:12:03.379 "unmap": true, 00:12:03.379 "write_zeroes": true, 00:12:03.379 "flush": true, 00:12:03.379 "reset": true, 00:12:03.379 "compare": false, 00:12:03.379 "compare_and_write": false, 00:12:03.379 "abort": true, 00:12:03.379 "nvme_admin": false, 00:12:03.379 "nvme_io": false 00:12:03.379 }, 00:12:03.379 "memory_domains": [ 00:12:03.379 { 00:12:03.379 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.379 "dma_device_type": 2 00:12:03.379 } 00:12:03.379 ], 00:12:03.379 "driver_specific": {} 00:12:03.379 } 00:12:03.379 ] 00:12:03.719 12:04:10 -- common/autotest_common.sh@895 -- # return 0 00:12:03.719 12:04:10 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:12:03.719 12:04:10 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:03.719 12:04:10 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:03.719 12:04:10 -- bdev/bdev_raid.sh@117 -- 
# local raid_bdev_name=Existed_Raid 00:12:03.719 12:04:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:03.719 12:04:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:03.719 12:04:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:03.719 12:04:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:12:03.719 12:04:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:03.719 12:04:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:03.719 12:04:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:03.719 12:04:10 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:03.719 12:04:10 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:03.719 12:04:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.719 12:04:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:03.719 "name": "Existed_Raid", 00:12:03.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.719 "strip_size_kb": 64, 00:12:03.719 "state": "configuring", 00:12:03.719 "raid_level": "raid0", 00:12:03.719 "superblock": false, 00:12:03.719 "num_base_bdevs": 4, 00:12:03.719 "num_base_bdevs_discovered": 3, 00:12:03.719 "num_base_bdevs_operational": 4, 00:12:03.719 "base_bdevs_list": [ 00:12:03.719 { 00:12:03.719 "name": "BaseBdev1", 00:12:03.719 "uuid": "f961a48d-db6e-4422-a81a-a9dd7dda4c78", 00:12:03.719 "is_configured": true, 00:12:03.719 "data_offset": 0, 00:12:03.719 "data_size": 65536 00:12:03.719 }, 00:12:03.719 { 00:12:03.719 "name": "BaseBdev2", 00:12:03.719 "uuid": "7fa9c365-b9d5-4431-86d0-689554c2ec4b", 00:12:03.719 "is_configured": true, 00:12:03.719 "data_offset": 0, 00:12:03.719 "data_size": 65536 00:12:03.719 }, 00:12:03.719 { 00:12:03.719 "name": "BaseBdev3", 00:12:03.719 "uuid": "aec8596a-8cfd-4015-869a-b01e4ce97f8d", 00:12:03.719 "is_configured": true, 00:12:03.719 
"data_offset": 0, 00:12:03.719 "data_size": 65536 00:12:03.719 }, 00:12:03.719 { 00:12:03.719 "name": "BaseBdev4", 00:12:03.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.719 "is_configured": false, 00:12:03.719 "data_offset": 0, 00:12:03.719 "data_size": 0 00:12:03.719 } 00:12:03.719 ] 00:12:03.719 }' 00:12:03.719 12:04:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:03.719 12:04:10 -- common/autotest_common.sh@10 -- # set +x 00:12:04.287 12:04:11 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:12:04.287 [2024-07-25 12:04:11.473959] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:04.287 [2024-07-25 12:04:11.473996] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0xb065f0 00:12:04.287 [2024-07-25 12:04:11.474002] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:04.287 [2024-07-25 12:04:11.474178] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xb0ae40 00:12:04.287 [2024-07-25 12:04:11.474257] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xb065f0 00:12:04.287 [2024-07-25 12:04:11.474263] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xb065f0 00:12:04.287 [2024-07-25 12:04:11.474385] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:04.287 BaseBdev4 00:12:04.287 12:04:11 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:12:04.287 12:04:11 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:12:04.287 12:04:11 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:04.287 12:04:11 -- common/autotest_common.sh@889 -- # local i 00:12:04.287 12:04:11 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:04.287 12:04:11 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 
00:12:04.287 12:04:11 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:04.546 12:04:11 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:04.546 [ 00:12:04.546 { 00:12:04.546 "name": "BaseBdev4", 00:12:04.546 "aliases": [ 00:12:04.546 "40d785db-8fee-4960-a9a8-16741f9c334e" 00:12:04.546 ], 00:12:04.546 "product_name": "Malloc disk", 00:12:04.546 "block_size": 512, 00:12:04.546 "num_blocks": 65536, 00:12:04.546 "uuid": "40d785db-8fee-4960-a9a8-16741f9c334e", 00:12:04.546 "assigned_rate_limits": { 00:12:04.546 "rw_ios_per_sec": 0, 00:12:04.546 "rw_mbytes_per_sec": 0, 00:12:04.546 "r_mbytes_per_sec": 0, 00:12:04.546 "w_mbytes_per_sec": 0 00:12:04.546 }, 00:12:04.546 "claimed": true, 00:12:04.546 "claim_type": "exclusive_write", 00:12:04.546 "zoned": false, 00:12:04.546 "supported_io_types": { 00:12:04.546 "read": true, 00:12:04.546 "write": true, 00:12:04.546 "unmap": true, 00:12:04.546 "write_zeroes": true, 00:12:04.546 "flush": true, 00:12:04.546 "reset": true, 00:12:04.546 "compare": false, 00:12:04.546 "compare_and_write": false, 00:12:04.546 "abort": true, 00:12:04.546 "nvme_admin": false, 00:12:04.546 "nvme_io": false 00:12:04.546 }, 00:12:04.546 "memory_domains": [ 00:12:04.546 { 00:12:04.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.546 "dma_device_type": 2 00:12:04.546 } 00:12:04.546 ], 00:12:04.546 "driver_specific": {} 00:12:04.546 } 00:12:04.546 ] 00:12:04.546 12:04:11 -- common/autotest_common.sh@895 -- # return 0 00:12:04.546 12:04:11 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:12:04.546 12:04:11 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:04.546 12:04:11 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:04.546 12:04:11 -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:12:04.546 12:04:11 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:04.546 12:04:11 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:04.546 12:04:11 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:04.546 12:04:11 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:12:04.546 12:04:11 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:04.546 12:04:11 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:04.546 12:04:11 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:04.546 12:04:11 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:04.546 12:04:11 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:04.546 12:04:11 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.805 12:04:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:04.805 "name": "Existed_Raid", 00:12:04.805 "uuid": "cd11947d-b800-4ca8-9bfe-4a4b3e5da05f", 00:12:04.805 "strip_size_kb": 64, 00:12:04.805 "state": "online", 00:12:04.805 "raid_level": "raid0", 00:12:04.805 "superblock": false, 00:12:04.805 "num_base_bdevs": 4, 00:12:04.805 "num_base_bdevs_discovered": 4, 00:12:04.805 "num_base_bdevs_operational": 4, 00:12:04.805 "base_bdevs_list": [ 00:12:04.805 { 00:12:04.805 "name": "BaseBdev1", 00:12:04.805 "uuid": "f961a48d-db6e-4422-a81a-a9dd7dda4c78", 00:12:04.805 "is_configured": true, 00:12:04.805 "data_offset": 0, 00:12:04.805 "data_size": 65536 00:12:04.805 }, 00:12:04.805 { 00:12:04.805 "name": "BaseBdev2", 00:12:04.805 "uuid": "7fa9c365-b9d5-4431-86d0-689554c2ec4b", 00:12:04.805 "is_configured": true, 00:12:04.805 "data_offset": 0, 00:12:04.805 "data_size": 65536 00:12:04.805 }, 00:12:04.805 { 00:12:04.805 "name": "BaseBdev3", 00:12:04.805 "uuid": "aec8596a-8cfd-4015-869a-b01e4ce97f8d", 00:12:04.805 "is_configured": true, 00:12:04.805 "data_offset": 0, 
00:12:04.805 "data_size": 65536 00:12:04.805 }, 00:12:04.805 { 00:12:04.805 "name": "BaseBdev4", 00:12:04.805 "uuid": "40d785db-8fee-4960-a9a8-16741f9c334e", 00:12:04.805 "is_configured": true, 00:12:04.805 "data_offset": 0, 00:12:04.805 "data_size": 65536 00:12:04.805 } 00:12:04.805 ] 00:12:04.805 }' 00:12:04.805 12:04:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:04.805 12:04:12 -- common/autotest_common.sh@10 -- # set +x 00:12:05.372 12:04:12 -- bdev/bdev_raid.sh@262 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:05.372 [2024-07-25 12:04:12.653045] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:05.372 [2024-07-25 12:04:12.653067] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:05.372 [2024-07-25 12:04:12.653103] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:05.372 12:04:12 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:12:05.372 12:04:12 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:12:05.372 12:04:12 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:12:05.372 12:04:12 -- bdev/bdev_raid.sh@197 -- # return 1 00:12:05.372 12:04:12 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:12:05.372 12:04:12 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:05.372 12:04:12 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:05.372 12:04:12 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:12:05.372 12:04:12 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:05.372 12:04:12 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:05.372 12:04:12 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:05.372 12:04:12 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:05.372 12:04:12 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:05.372 12:04:12 -- 
bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:05.372 12:04:12 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:05.372 12:04:12 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:05.372 12:04:12 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.630 12:04:12 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:05.630 "name": "Existed_Raid", 00:12:05.630 "uuid": "cd11947d-b800-4ca8-9bfe-4a4b3e5da05f", 00:12:05.630 "strip_size_kb": 64, 00:12:05.630 "state": "offline", 00:12:05.630 "raid_level": "raid0", 00:12:05.630 "superblock": false, 00:12:05.630 "num_base_bdevs": 4, 00:12:05.630 "num_base_bdevs_discovered": 3, 00:12:05.630 "num_base_bdevs_operational": 3, 00:12:05.630 "base_bdevs_list": [ 00:12:05.630 { 00:12:05.630 "name": null, 00:12:05.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:05.630 "is_configured": false, 00:12:05.630 "data_offset": 0, 00:12:05.630 "data_size": 65536 00:12:05.630 }, 00:12:05.630 { 00:12:05.630 "name": "BaseBdev2", 00:12:05.630 "uuid": "7fa9c365-b9d5-4431-86d0-689554c2ec4b", 00:12:05.630 "is_configured": true, 00:12:05.630 "data_offset": 0, 00:12:05.630 "data_size": 65536 00:12:05.630 }, 00:12:05.630 { 00:12:05.630 "name": "BaseBdev3", 00:12:05.630 "uuid": "aec8596a-8cfd-4015-869a-b01e4ce97f8d", 00:12:05.630 "is_configured": true, 00:12:05.630 "data_offset": 0, 00:12:05.630 "data_size": 65536 00:12:05.630 }, 00:12:05.630 { 00:12:05.630 "name": "BaseBdev4", 00:12:05.630 "uuid": "40d785db-8fee-4960-a9a8-16741f9c334e", 00:12:05.630 "is_configured": true, 00:12:05.630 "data_offset": 0, 00:12:05.630 "data_size": 65536 00:12:05.630 } 00:12:05.630 ] 00:12:05.630 }' 00:12:05.630 12:04:12 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:05.630 12:04:12 -- common/autotest_common.sh@10 -- # set +x 00:12:06.212 12:04:13 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:12:06.212 12:04:13 -- 
bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:06.212 12:04:13 -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:06.212 12:04:13 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:12:06.212 12:04:13 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:12:06.212 12:04:13 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:06.212 12:04:13 -- bdev/bdev_raid.sh@279 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:06.476 [2024-07-25 12:04:13.656510] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:06.476 12:04:13 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:12:06.476 12:04:13 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:06.476 12:04:13 -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:06.476 12:04:13 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:12:06.734 12:04:13 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:12:06.734 12:04:13 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:06.734 12:04:13 -- bdev/bdev_raid.sh@279 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:06.734 [2024-07-25 12:04:14.003463] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:06.734 12:04:14 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:12:06.734 12:04:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:06.734 12:04:14 -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:06.734 12:04:14 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:12:06.992 12:04:14 -- bdev/bdev_raid.sh@274 -- # 
raid_bdev=Existed_Raid 00:12:06.992 12:04:14 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:06.992 12:04:14 -- bdev/bdev_raid.sh@279 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:12:07.249 [2024-07-25 12:04:14.355839] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:07.249 [2024-07-25 12:04:14.355872] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xb065f0 name Existed_Raid, state offline 00:12:07.249 12:04:14 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:12:07.249 12:04:14 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:07.249 12:04:14 -- bdev/bdev_raid.sh@281 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:07.249 12:04:14 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:12:07.249 12:04:14 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:12:07.249 12:04:14 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:12:07.249 12:04:14 -- bdev/bdev_raid.sh@287 -- # killprocess 1239970 00:12:07.249 12:04:14 -- common/autotest_common.sh@926 -- # '[' -z 1239970 ']' 00:12:07.249 12:04:14 -- common/autotest_common.sh@930 -- # kill -0 1239970 00:12:07.249 12:04:14 -- common/autotest_common.sh@931 -- # uname 00:12:07.539 12:04:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:07.539 12:04:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1239970 00:12:07.539 12:04:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:07.539 12:04:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:07.539 12:04:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1239970' 00:12:07.539 killing process with pid 1239970 00:12:07.539 12:04:14 -- common/autotest_common.sh@945 -- # kill 1239970 00:12:07.539 [2024-07-25 12:04:14.603993] bdev_raid.c:1234:raid_bdev_fini_start: 
*DEBUG*: raid_bdev_fini_start 00:12:07.539 12:04:14 -- common/autotest_common.sh@950 -- # wait 1239970 00:12:07.539 [2024-07-25 12:04:14.604881] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:07.539 12:04:14 -- bdev/bdev_raid.sh@289 -- # return 0 00:12:07.539 00:12:07.539 real 0m9.950s 00:12:07.539 user 0m17.542s 00:12:07.539 sys 0m1.953s 00:12:07.539 12:04:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:07.539 12:04:14 -- common/autotest_common.sh@10 -- # set +x 00:12:07.539 ************************************ 00:12:07.539 END TEST raid_state_function_test 00:12:07.539 ************************************ 00:12:07.796 12:04:14 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:12:07.797 12:04:14 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:12:07.797 12:04:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:07.797 12:04:14 -- common/autotest_common.sh@10 -- # set +x 00:12:07.797 ************************************ 00:12:07.797 START TEST raid_state_function_test_sb 00:12:07.797 ************************************ 00:12:07.797 12:04:14 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid0 4 true 00:12:07.797 12:04:14 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid0 00:12:07.797 12:04:14 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:12:07.797 12:04:14 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:12:07.797 12:04:14 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:12:07.797 12:04:14 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:12:07.797 12:04:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:07.797 12:04:14 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:12:07.797 12:04:14 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:07.797 12:04:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:07.797 12:04:14 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:12:07.797 12:04:14 -- bdev/bdev_raid.sh@206 
-- # (( i++ )) 00:12:07.797 12:04:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:07.797 12:04:14 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:12:07.797 12:04:14 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:07.797 12:04:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:07.797 12:04:14 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:12:07.797 12:04:14 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:07.797 12:04:14 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:07.797 12:04:14 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:07.797 12:04:14 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:12:07.797 12:04:14 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:12:07.797 12:04:14 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:12:07.797 12:04:14 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:12:07.797 12:04:14 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:12:07.797 12:04:14 -- bdev/bdev_raid.sh@212 -- # '[' raid0 '!=' raid1 ']' 00:12:07.797 12:04:14 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:12:07.797 12:04:14 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:12:07.797 12:04:14 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:12:07.797 12:04:14 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:12:07.797 12:04:14 -- bdev/bdev_raid.sh@226 -- # raid_pid=1241509 00:12:07.797 12:04:14 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 1241509' 00:12:07.797 Process raid pid: 1241509 00:12:07.797 12:04:14 -- bdev/bdev_raid.sh@225 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:07.797 12:04:14 -- bdev/bdev_raid.sh@228 -- # waitforlisten 1241509 /var/tmp/spdk-raid.sock 00:12:07.797 12:04:14 -- common/autotest_common.sh@819 -- # '[' -z 1241509 ']' 00:12:07.797 12:04:14 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:12:07.797 12:04:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:07.797 12:04:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:07.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:07.797 12:04:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:07.797 12:04:14 -- common/autotest_common.sh@10 -- # set +x 00:12:07.797 [2024-07-25 12:04:14.945691] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:12:07.797 [2024-07-25 12:04:14.945749] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:07.797 [2024-07-25 12:04:15.031609] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.055 [2024-07-25 12:04:15.120548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.055 [2024-07-25 12:04:15.175646] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.055 [2024-07-25 12:04:15.175669] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.620 12:04:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:08.620 12:04:15 -- common/autotest_common.sh@852 -- # return 0 00:12:08.621 12:04:15 -- bdev/bdev_raid.sh@232 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:08.621 [2024-07-25 12:04:15.891188] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:08.621 [2024-07-25 12:04:15.891222] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:08.621 
[2024-07-25 12:04:15.891229] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:08.621 [2024-07-25 12:04:15.891237] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:08.621 [2024-07-25 12:04:15.891242] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:08.621 [2024-07-25 12:04:15.891249] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:08.621 [2024-07-25 12:04:15.891254] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:08.621 [2024-07-25 12:04:15.891262] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:08.621 12:04:15 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:08.621 12:04:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:08.621 12:04:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:08.621 12:04:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:08.621 12:04:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:08.621 12:04:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:12:08.621 12:04:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:08.621 12:04:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:08.621 12:04:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:08.621 12:04:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:08.621 12:04:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.621 12:04:15 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:08.879 12:04:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:08.879 "name": "Existed_Raid", 00:12:08.879 "uuid": 
"0e7ca272-e1a4-4f64-871f-65c739bd80e7", 00:12:08.879 "strip_size_kb": 64, 00:12:08.879 "state": "configuring", 00:12:08.879 "raid_level": "raid0", 00:12:08.879 "superblock": true, 00:12:08.879 "num_base_bdevs": 4, 00:12:08.879 "num_base_bdevs_discovered": 0, 00:12:08.879 "num_base_bdevs_operational": 4, 00:12:08.879 "base_bdevs_list": [ 00:12:08.879 { 00:12:08.879 "name": "BaseBdev1", 00:12:08.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.879 "is_configured": false, 00:12:08.879 "data_offset": 0, 00:12:08.879 "data_size": 0 00:12:08.879 }, 00:12:08.879 { 00:12:08.879 "name": "BaseBdev2", 00:12:08.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.879 "is_configured": false, 00:12:08.879 "data_offset": 0, 00:12:08.879 "data_size": 0 00:12:08.879 }, 00:12:08.879 { 00:12:08.879 "name": "BaseBdev3", 00:12:08.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.879 "is_configured": false, 00:12:08.879 "data_offset": 0, 00:12:08.879 "data_size": 0 00:12:08.879 }, 00:12:08.879 { 00:12:08.879 "name": "BaseBdev4", 00:12:08.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.879 "is_configured": false, 00:12:08.879 "data_offset": 0, 00:12:08.879 "data_size": 0 00:12:08.879 } 00:12:08.879 ] 00:12:08.879 }' 00:12:08.879 12:04:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:08.879 12:04:16 -- common/autotest_common.sh@10 -- # set +x 00:12:09.444 12:04:16 -- bdev/bdev_raid.sh@234 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:09.444 [2024-07-25 12:04:16.721236] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:09.444 [2024-07-25 12:04:16.721256] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x8bdd80 name Existed_Raid, state configuring 00:12:09.444 12:04:16 -- bdev/bdev_raid.sh@238 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s 
-r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:09.702 [2024-07-25 12:04:16.897701] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:09.702 [2024-07-25 12:04:16.897721] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:09.702 [2024-07-25 12:04:16.897726] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:09.702 [2024-07-25 12:04:16.897734] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:09.702 [2024-07-25 12:04:16.897739] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:09.702 [2024-07-25 12:04:16.897746] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:09.702 [2024-07-25 12:04:16.897750] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:09.702 [2024-07-25 12:04:16.897757] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:09.702 12:04:16 -- bdev/bdev_raid.sh@239 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:09.959 [2024-07-25 12:04:17.078754] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:09.959 BaseBdev1 00:12:09.959 12:04:17 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:12:09.959 12:04:17 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:12:09.959 12:04:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:09.959 12:04:17 -- common/autotest_common.sh@889 -- # local i 00:12:09.959 12:04:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:09.959 12:04:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:09.959 12:04:17 -- common/autotest_common.sh@892 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:09.959 12:04:17 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:10.217 [ 00:12:10.217 { 00:12:10.217 "name": "BaseBdev1", 00:12:10.217 "aliases": [ 00:12:10.217 "4fc38fbe-4eb1-4510-81cd-118e32966855" 00:12:10.217 ], 00:12:10.217 "product_name": "Malloc disk", 00:12:10.217 "block_size": 512, 00:12:10.217 "num_blocks": 65536, 00:12:10.217 "uuid": "4fc38fbe-4eb1-4510-81cd-118e32966855", 00:12:10.217 "assigned_rate_limits": { 00:12:10.217 "rw_ios_per_sec": 0, 00:12:10.217 "rw_mbytes_per_sec": 0, 00:12:10.217 "r_mbytes_per_sec": 0, 00:12:10.217 "w_mbytes_per_sec": 0 00:12:10.217 }, 00:12:10.217 "claimed": true, 00:12:10.217 "claim_type": "exclusive_write", 00:12:10.217 "zoned": false, 00:12:10.217 "supported_io_types": { 00:12:10.217 "read": true, 00:12:10.217 "write": true, 00:12:10.217 "unmap": true, 00:12:10.217 "write_zeroes": true, 00:12:10.217 "flush": true, 00:12:10.217 "reset": true, 00:12:10.217 "compare": false, 00:12:10.217 "compare_and_write": false, 00:12:10.217 "abort": true, 00:12:10.217 "nvme_admin": false, 00:12:10.217 "nvme_io": false 00:12:10.217 }, 00:12:10.217 "memory_domains": [ 00:12:10.217 { 00:12:10.217 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.217 "dma_device_type": 2 00:12:10.217 } 00:12:10.217 ], 00:12:10.217 "driver_specific": {} 00:12:10.217 } 00:12:10.217 ] 00:12:10.217 12:04:17 -- common/autotest_common.sh@895 -- # return 0 00:12:10.217 12:04:17 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:10.217 12:04:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:10.217 12:04:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:10.217 12:04:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:10.217 
12:04:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:10.217 12:04:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:12:10.217 12:04:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:10.217 12:04:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:10.217 12:04:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:10.217 12:04:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:10.217 12:04:17 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:10.217 12:04:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.475 12:04:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:10.475 "name": "Existed_Raid", 00:12:10.475 "uuid": "4a334691-95df-4d7d-bdb1-1caeefa3eae8", 00:12:10.475 "strip_size_kb": 64, 00:12:10.475 "state": "configuring", 00:12:10.475 "raid_level": "raid0", 00:12:10.475 "superblock": true, 00:12:10.475 "num_base_bdevs": 4, 00:12:10.475 "num_base_bdevs_discovered": 1, 00:12:10.475 "num_base_bdevs_operational": 4, 00:12:10.475 "base_bdevs_list": [ 00:12:10.475 { 00:12:10.475 "name": "BaseBdev1", 00:12:10.475 "uuid": "4fc38fbe-4eb1-4510-81cd-118e32966855", 00:12:10.475 "is_configured": true, 00:12:10.475 "data_offset": 2048, 00:12:10.475 "data_size": 63488 00:12:10.475 }, 00:12:10.475 { 00:12:10.475 "name": "BaseBdev2", 00:12:10.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.475 "is_configured": false, 00:12:10.475 "data_offset": 0, 00:12:10.475 "data_size": 0 00:12:10.475 }, 00:12:10.475 { 00:12:10.475 "name": "BaseBdev3", 00:12:10.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.475 "is_configured": false, 00:12:10.475 "data_offset": 0, 00:12:10.475 "data_size": 0 00:12:10.475 }, 00:12:10.475 { 00:12:10.475 "name": "BaseBdev4", 00:12:10.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.475 "is_configured": false, 
00:12:10.475 "data_offset": 0, 00:12:10.475 "data_size": 0 00:12:10.475 } 00:12:10.475 ] 00:12:10.475 }' 00:12:10.475 12:04:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:10.475 12:04:17 -- common/autotest_common.sh@10 -- # set +x 00:12:11.041 12:04:18 -- bdev/bdev_raid.sh@242 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:11.041 [2024-07-25 12:04:18.237754] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:11.041 [2024-07-25 12:04:18.237796] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x8be000 name Existed_Raid, state configuring 00:12:11.041 12:04:18 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:12:11.041 12:04:18 -- bdev/bdev_raid.sh@246 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:11.299 12:04:18 -- bdev/bdev_raid.sh@247 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:11.299 BaseBdev1 00:12:11.299 12:04:18 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:12:11.299 12:04:18 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:12:11.299 12:04:18 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:11.299 12:04:18 -- common/autotest_common.sh@889 -- # local i 00:12:11.299 12:04:18 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:11.299 12:04:18 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:11.299 12:04:18 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:11.556 12:04:18 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:11.815 [ 00:12:11.815 { 00:12:11.815 "name": 
"BaseBdev1", 00:12:11.815 "aliases": [ 00:12:11.815 "f402e29b-7473-45e5-8dcd-c2b4a6bb9c24" 00:12:11.815 ], 00:12:11.815 "product_name": "Malloc disk", 00:12:11.815 "block_size": 512, 00:12:11.815 "num_blocks": 65536, 00:12:11.815 "uuid": "f402e29b-7473-45e5-8dcd-c2b4a6bb9c24", 00:12:11.815 "assigned_rate_limits": { 00:12:11.815 "rw_ios_per_sec": 0, 00:12:11.815 "rw_mbytes_per_sec": 0, 00:12:11.815 "r_mbytes_per_sec": 0, 00:12:11.815 "w_mbytes_per_sec": 0 00:12:11.815 }, 00:12:11.815 "claimed": false, 00:12:11.815 "zoned": false, 00:12:11.815 "supported_io_types": { 00:12:11.815 "read": true, 00:12:11.815 "write": true, 00:12:11.815 "unmap": true, 00:12:11.815 "write_zeroes": true, 00:12:11.815 "flush": true, 00:12:11.815 "reset": true, 00:12:11.815 "compare": false, 00:12:11.815 "compare_and_write": false, 00:12:11.815 "abort": true, 00:12:11.815 "nvme_admin": false, 00:12:11.815 "nvme_io": false 00:12:11.815 }, 00:12:11.815 "memory_domains": [ 00:12:11.815 { 00:12:11.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.815 "dma_device_type": 2 00:12:11.815 } 00:12:11.815 ], 00:12:11.815 "driver_specific": {} 00:12:11.815 } 00:12:11.815 ] 00:12:11.815 12:04:18 -- common/autotest_common.sh@895 -- # return 0 00:12:11.815 12:04:18 -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:11.815 [2024-07-25 12:04:19.033382] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:11.815 [2024-07-25 12:04:19.034498] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:11.815 [2024-07-25 12:04:19.034527] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:11.815 [2024-07-25 12:04:19.034533] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:11.815 
[2024-07-25 12:04:19.034541] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:11.815 [2024-07-25 12:04:19.034546] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:11.815 [2024-07-25 12:04:19.034553] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:11.815 12:04:19 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:12:11.815 12:04:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:11.815 12:04:19 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:11.815 12:04:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:11.815 12:04:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:11.815 12:04:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:11.815 12:04:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:11.815 12:04:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:12:11.815 12:04:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:11.815 12:04:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:11.815 12:04:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:11.815 12:04:19 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:11.815 12:04:19 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:11.815 12:04:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.073 12:04:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:12.073 "name": "Existed_Raid", 00:12:12.073 "uuid": "4c8e6f18-2d30-406b-b045-b9eb93045d25", 00:12:12.073 "strip_size_kb": 64, 00:12:12.073 "state": "configuring", 00:12:12.073 "raid_level": "raid0", 00:12:12.073 "superblock": true, 00:12:12.073 "num_base_bdevs": 4, 00:12:12.073 "num_base_bdevs_discovered": 1, 00:12:12.073 
"num_base_bdevs_operational": 4, 00:12:12.073 "base_bdevs_list": [ 00:12:12.073 { 00:12:12.073 "name": "BaseBdev1", 00:12:12.073 "uuid": "f402e29b-7473-45e5-8dcd-c2b4a6bb9c24", 00:12:12.073 "is_configured": true, 00:12:12.073 "data_offset": 2048, 00:12:12.073 "data_size": 63488 00:12:12.073 }, 00:12:12.073 { 00:12:12.073 "name": "BaseBdev2", 00:12:12.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.073 "is_configured": false, 00:12:12.073 "data_offset": 0, 00:12:12.073 "data_size": 0 00:12:12.073 }, 00:12:12.073 { 00:12:12.073 "name": "BaseBdev3", 00:12:12.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.073 "is_configured": false, 00:12:12.073 "data_offset": 0, 00:12:12.073 "data_size": 0 00:12:12.073 }, 00:12:12.073 { 00:12:12.073 "name": "BaseBdev4", 00:12:12.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:12.073 "is_configured": false, 00:12:12.073 "data_offset": 0, 00:12:12.073 "data_size": 0 00:12:12.073 } 00:12:12.073 ] 00:12:12.073 }' 00:12:12.073 12:04:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:12.073 12:04:19 -- common/autotest_common.sh@10 -- # set +x 00:12:12.639 12:04:19 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:12.639 [2024-07-25 12:04:19.855463] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:12.639 BaseBdev2 00:12:12.639 12:04:19 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:12:12.639 12:04:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:12:12.639 12:04:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:12.639 12:04:19 -- common/autotest_common.sh@889 -- # local i 00:12:12.640 12:04:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:12.640 12:04:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:12.640 12:04:19 -- common/autotest_common.sh@892 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:12.898 12:04:20 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:12.898 [ 00:12:12.898 { 00:12:12.898 "name": "BaseBdev2", 00:12:12.898 "aliases": [ 00:12:12.898 "710ca994-6089-43d1-9dd3-d248070e5330" 00:12:12.898 ], 00:12:12.898 "product_name": "Malloc disk", 00:12:12.898 "block_size": 512, 00:12:12.898 "num_blocks": 65536, 00:12:12.898 "uuid": "710ca994-6089-43d1-9dd3-d248070e5330", 00:12:12.898 "assigned_rate_limits": { 00:12:12.898 "rw_ios_per_sec": 0, 00:12:12.898 "rw_mbytes_per_sec": 0, 00:12:12.898 "r_mbytes_per_sec": 0, 00:12:12.898 "w_mbytes_per_sec": 0 00:12:12.898 }, 00:12:12.898 "claimed": true, 00:12:12.898 "claim_type": "exclusive_write", 00:12:12.898 "zoned": false, 00:12:12.898 "supported_io_types": { 00:12:12.898 "read": true, 00:12:12.898 "write": true, 00:12:12.898 "unmap": true, 00:12:12.898 "write_zeroes": true, 00:12:12.898 "flush": true, 00:12:12.898 "reset": true, 00:12:12.898 "compare": false, 00:12:12.898 "compare_and_write": false, 00:12:12.898 "abort": true, 00:12:12.898 "nvme_admin": false, 00:12:12.898 "nvme_io": false 00:12:12.898 }, 00:12:12.898 "memory_domains": [ 00:12:12.898 { 00:12:12.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.898 "dma_device_type": 2 00:12:12.898 } 00:12:12.898 ], 00:12:12.898 "driver_specific": {} 00:12:12.898 } 00:12:12.898 ] 00:12:13.156 12:04:20 -- common/autotest_common.sh@895 -- # return 0 00:12:13.156 12:04:20 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:12:13.157 12:04:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:13.157 12:04:20 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:13.157 12:04:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:13.157 12:04:20 -- 
bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:13.157 12:04:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:13.157 12:04:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:13.157 12:04:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:12:13.157 12:04:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:13.157 12:04:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:13.157 12:04:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:13.157 12:04:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:13.157 12:04:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.157 12:04:20 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:13.157 12:04:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:13.157 "name": "Existed_Raid", 00:12:13.157 "uuid": "4c8e6f18-2d30-406b-b045-b9eb93045d25", 00:12:13.157 "strip_size_kb": 64, 00:12:13.157 "state": "configuring", 00:12:13.157 "raid_level": "raid0", 00:12:13.157 "superblock": true, 00:12:13.157 "num_base_bdevs": 4, 00:12:13.157 "num_base_bdevs_discovered": 2, 00:12:13.157 "num_base_bdevs_operational": 4, 00:12:13.157 "base_bdevs_list": [ 00:12:13.157 { 00:12:13.157 "name": "BaseBdev1", 00:12:13.157 "uuid": "f402e29b-7473-45e5-8dcd-c2b4a6bb9c24", 00:12:13.157 "is_configured": true, 00:12:13.157 "data_offset": 2048, 00:12:13.157 "data_size": 63488 00:12:13.157 }, 00:12:13.157 { 00:12:13.157 "name": "BaseBdev2", 00:12:13.157 "uuid": "710ca994-6089-43d1-9dd3-d248070e5330", 00:12:13.157 "is_configured": true, 00:12:13.157 "data_offset": 2048, 00:12:13.157 "data_size": 63488 00:12:13.157 }, 00:12:13.157 { 00:12:13.157 "name": "BaseBdev3", 00:12:13.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.157 "is_configured": false, 00:12:13.157 "data_offset": 0, 00:12:13.157 "data_size": 0 00:12:13.157 }, 
00:12:13.157 { 00:12:13.157 "name": "BaseBdev4", 00:12:13.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.157 "is_configured": false, 00:12:13.157 "data_offset": 0, 00:12:13.157 "data_size": 0 00:12:13.157 } 00:12:13.157 ] 00:12:13.157 }' 00:12:13.157 12:04:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:13.157 12:04:20 -- common/autotest_common.sh@10 -- # set +x 00:12:13.723 12:04:20 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:13.981 [2024-07-25 12:04:21.041601] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:13.981 BaseBdev3 00:12:13.981 12:04:21 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:12:13.981 12:04:21 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:12:13.981 12:04:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:13.981 12:04:21 -- common/autotest_common.sh@889 -- # local i 00:12:13.981 12:04:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:13.981 12:04:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:13.981 12:04:21 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:13.981 12:04:21 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:14.239 [ 00:12:14.239 { 00:12:14.239 "name": "BaseBdev3", 00:12:14.239 "aliases": [ 00:12:14.239 "3b558101-0af8-474f-9c98-a0f28f093750" 00:12:14.239 ], 00:12:14.239 "product_name": "Malloc disk", 00:12:14.239 "block_size": 512, 00:12:14.239 "num_blocks": 65536, 00:12:14.239 "uuid": "3b558101-0af8-474f-9c98-a0f28f093750", 00:12:14.239 "assigned_rate_limits": { 00:12:14.239 "rw_ios_per_sec": 0, 00:12:14.239 "rw_mbytes_per_sec": 0, 00:12:14.239 
"r_mbytes_per_sec": 0, 00:12:14.239 "w_mbytes_per_sec": 0 00:12:14.239 }, 00:12:14.239 "claimed": true, 00:12:14.239 "claim_type": "exclusive_write", 00:12:14.239 "zoned": false, 00:12:14.239 "supported_io_types": { 00:12:14.239 "read": true, 00:12:14.239 "write": true, 00:12:14.239 "unmap": true, 00:12:14.239 "write_zeroes": true, 00:12:14.239 "flush": true, 00:12:14.239 "reset": true, 00:12:14.239 "compare": false, 00:12:14.239 "compare_and_write": false, 00:12:14.239 "abort": true, 00:12:14.239 "nvme_admin": false, 00:12:14.239 "nvme_io": false 00:12:14.239 }, 00:12:14.239 "memory_domains": [ 00:12:14.239 { 00:12:14.239 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.239 "dma_device_type": 2 00:12:14.239 } 00:12:14.239 ], 00:12:14.239 "driver_specific": {} 00:12:14.239 } 00:12:14.239 ] 00:12:14.239 12:04:21 -- common/autotest_common.sh@895 -- # return 0 00:12:14.239 12:04:21 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:12:14.239 12:04:21 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:14.239 12:04:21 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:14.239 12:04:21 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:14.239 12:04:21 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:14.239 12:04:21 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:14.239 12:04:21 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:14.239 12:04:21 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:12:14.239 12:04:21 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:14.239 12:04:21 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:14.239 12:04:21 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:14.239 12:04:21 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:14.239 12:04:21 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:14.239 
12:04:21 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.498 12:04:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:14.498 "name": "Existed_Raid", 00:12:14.498 "uuid": "4c8e6f18-2d30-406b-b045-b9eb93045d25", 00:12:14.498 "strip_size_kb": 64, 00:12:14.498 "state": "configuring", 00:12:14.498 "raid_level": "raid0", 00:12:14.498 "superblock": true, 00:12:14.498 "num_base_bdevs": 4, 00:12:14.498 "num_base_bdevs_discovered": 3, 00:12:14.498 "num_base_bdevs_operational": 4, 00:12:14.498 "base_bdevs_list": [ 00:12:14.498 { 00:12:14.498 "name": "BaseBdev1", 00:12:14.498 "uuid": "f402e29b-7473-45e5-8dcd-c2b4a6bb9c24", 00:12:14.498 "is_configured": true, 00:12:14.498 "data_offset": 2048, 00:12:14.498 "data_size": 63488 00:12:14.498 }, 00:12:14.498 { 00:12:14.498 "name": "BaseBdev2", 00:12:14.499 "uuid": "710ca994-6089-43d1-9dd3-d248070e5330", 00:12:14.499 "is_configured": true, 00:12:14.499 "data_offset": 2048, 00:12:14.499 "data_size": 63488 00:12:14.499 }, 00:12:14.499 { 00:12:14.499 "name": "BaseBdev3", 00:12:14.499 "uuid": "3b558101-0af8-474f-9c98-a0f28f093750", 00:12:14.499 "is_configured": true, 00:12:14.499 "data_offset": 2048, 00:12:14.499 "data_size": 63488 00:12:14.499 }, 00:12:14.499 { 00:12:14.499 "name": "BaseBdev4", 00:12:14.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.499 "is_configured": false, 00:12:14.499 "data_offset": 0, 00:12:14.499 "data_size": 0 00:12:14.499 } 00:12:14.499 ] 00:12:14.499 }' 00:12:14.499 12:04:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:14.499 12:04:21 -- common/autotest_common.sh@10 -- # set +x 00:12:14.756 12:04:22 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:12:15.015 [2024-07-25 12:04:22.187464] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:15.015 [2024-07-25 12:04:22.187604] 
bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0xa5e130 00:12:15.015 [2024-07-25 12:04:22.187615] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:15.015 [2024-07-25 12:04:22.187736] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x8b6990 00:12:15.015 [2024-07-25 12:04:22.187818] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xa5e130 00:12:15.015 [2024-07-25 12:04:22.187824] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xa5e130 00:12:15.015 [2024-07-25 12:04:22.187886] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.015 BaseBdev4 00:12:15.015 12:04:22 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:12:15.015 12:04:22 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:12:15.015 12:04:22 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:15.015 12:04:22 -- common/autotest_common.sh@889 -- # local i 00:12:15.015 12:04:22 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:15.015 12:04:22 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:15.015 12:04:22 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:15.274 12:04:22 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:15.274 [ 00:12:15.274 { 00:12:15.274 "name": "BaseBdev4", 00:12:15.274 "aliases": [ 00:12:15.274 "189bf972-0bc5-4699-aacf-eb278fe1a962" 00:12:15.274 ], 00:12:15.274 "product_name": "Malloc disk", 00:12:15.274 "block_size": 512, 00:12:15.274 "num_blocks": 65536, 00:12:15.274 "uuid": "189bf972-0bc5-4699-aacf-eb278fe1a962", 00:12:15.274 "assigned_rate_limits": { 00:12:15.274 "rw_ios_per_sec": 0, 00:12:15.274 "rw_mbytes_per_sec": 0, 
00:12:15.274 "r_mbytes_per_sec": 0, 00:12:15.274 "w_mbytes_per_sec": 0 00:12:15.274 }, 00:12:15.274 "claimed": true, 00:12:15.274 "claim_type": "exclusive_write", 00:12:15.274 "zoned": false, 00:12:15.274 "supported_io_types": { 00:12:15.274 "read": true, 00:12:15.274 "write": true, 00:12:15.274 "unmap": true, 00:12:15.274 "write_zeroes": true, 00:12:15.274 "flush": true, 00:12:15.274 "reset": true, 00:12:15.274 "compare": false, 00:12:15.274 "compare_and_write": false, 00:12:15.274 "abort": true, 00:12:15.274 "nvme_admin": false, 00:12:15.274 "nvme_io": false 00:12:15.274 }, 00:12:15.274 "memory_domains": [ 00:12:15.274 { 00:12:15.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.274 "dma_device_type": 2 00:12:15.274 } 00:12:15.274 ], 00:12:15.274 "driver_specific": {} 00:12:15.274 } 00:12:15.274 ] 00:12:15.274 12:04:22 -- common/autotest_common.sh@895 -- # return 0 00:12:15.274 12:04:22 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:12:15.274 12:04:22 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:15.274 12:04:22 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:15.274 12:04:22 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:15.274 12:04:22 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:15.274 12:04:22 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:15.274 12:04:22 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:15.274 12:04:22 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:12:15.274 12:04:22 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:15.274 12:04:22 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:15.274 12:04:22 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:15.274 12:04:22 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:15.274 12:04:22 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:12:15.274 12:04:22 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.534 12:04:22 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:15.534 "name": "Existed_Raid", 00:12:15.534 "uuid": "4c8e6f18-2d30-406b-b045-b9eb93045d25", 00:12:15.534 "strip_size_kb": 64, 00:12:15.534 "state": "online", 00:12:15.534 "raid_level": "raid0", 00:12:15.534 "superblock": true, 00:12:15.534 "num_base_bdevs": 4, 00:12:15.534 "num_base_bdevs_discovered": 4, 00:12:15.534 "num_base_bdevs_operational": 4, 00:12:15.534 "base_bdevs_list": [ 00:12:15.534 { 00:12:15.534 "name": "BaseBdev1", 00:12:15.534 "uuid": "f402e29b-7473-45e5-8dcd-c2b4a6bb9c24", 00:12:15.534 "is_configured": true, 00:12:15.534 "data_offset": 2048, 00:12:15.534 "data_size": 63488 00:12:15.534 }, 00:12:15.534 { 00:12:15.534 "name": "BaseBdev2", 00:12:15.534 "uuid": "710ca994-6089-43d1-9dd3-d248070e5330", 00:12:15.534 "is_configured": true, 00:12:15.534 "data_offset": 2048, 00:12:15.534 "data_size": 63488 00:12:15.534 }, 00:12:15.534 { 00:12:15.534 "name": "BaseBdev3", 00:12:15.534 "uuid": "3b558101-0af8-474f-9c98-a0f28f093750", 00:12:15.534 "is_configured": true, 00:12:15.534 "data_offset": 2048, 00:12:15.534 "data_size": 63488 00:12:15.534 }, 00:12:15.534 { 00:12:15.534 "name": "BaseBdev4", 00:12:15.534 "uuid": "189bf972-0bc5-4699-aacf-eb278fe1a962", 00:12:15.534 "is_configured": true, 00:12:15.534 "data_offset": 2048, 00:12:15.534 "data_size": 63488 00:12:15.534 } 00:12:15.534 ] 00:12:15.534 }' 00:12:15.534 12:04:22 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:15.534 12:04:22 -- common/autotest_common.sh@10 -- # set +x 00:12:16.102 12:04:23 -- bdev/bdev_raid.sh@262 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:16.102 [2024-07-25 12:04:23.354517] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:16.102 [2024-07-25 12:04:23.354540] bdev_raid.c:1734:raid_bdev_deconfigure: 
*DEBUG*: raid bdev state changing from online to offline 00:12:16.102 [2024-07-25 12:04:23.354567] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:16.102 12:04:23 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:12:16.102 12:04:23 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid0 00:12:16.102 12:04:23 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:12:16.102 12:04:23 -- bdev/bdev_raid.sh@197 -- # return 1 00:12:16.102 12:04:23 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:12:16.102 12:04:23 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:16.102 12:04:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:16.102 12:04:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:12:16.102 12:04:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0 00:12:16.102 12:04:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:16.102 12:04:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:16.102 12:04:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:16.102 12:04:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:16.102 12:04:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:16.102 12:04:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:16.102 12:04:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.102 12:04:23 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:16.361 12:04:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:16.361 "name": "Existed_Raid", 00:12:16.361 "uuid": "4c8e6f18-2d30-406b-b045-b9eb93045d25", 00:12:16.361 "strip_size_kb": 64, 00:12:16.361 "state": "offline", 00:12:16.361 "raid_level": "raid0", 00:12:16.361 "superblock": true, 00:12:16.361 "num_base_bdevs": 4, 00:12:16.361 "num_base_bdevs_discovered": 3, 00:12:16.361 "num_base_bdevs_operational": 
3, 00:12:16.361 "base_bdevs_list": [ 00:12:16.361 { 00:12:16.361 "name": null, 00:12:16.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:16.361 "is_configured": false, 00:12:16.361 "data_offset": 2048, 00:12:16.361 "data_size": 63488 00:12:16.361 }, 00:12:16.361 { 00:12:16.361 "name": "BaseBdev2", 00:12:16.361 "uuid": "710ca994-6089-43d1-9dd3-d248070e5330", 00:12:16.361 "is_configured": true, 00:12:16.361 "data_offset": 2048, 00:12:16.361 "data_size": 63488 00:12:16.361 }, 00:12:16.361 { 00:12:16.361 "name": "BaseBdev3", 00:12:16.361 "uuid": "3b558101-0af8-474f-9c98-a0f28f093750", 00:12:16.361 "is_configured": true, 00:12:16.361 "data_offset": 2048, 00:12:16.361 "data_size": 63488 00:12:16.361 }, 00:12:16.361 { 00:12:16.361 "name": "BaseBdev4", 00:12:16.361 "uuid": "189bf972-0bc5-4699-aacf-eb278fe1a962", 00:12:16.361 "is_configured": true, 00:12:16.361 "data_offset": 2048, 00:12:16.361 "data_size": 63488 00:12:16.361 } 00:12:16.361 ] 00:12:16.361 }' 00:12:16.361 12:04:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:16.361 12:04:23 -- common/autotest_common.sh@10 -- # set +x 00:12:16.928 12:04:24 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:12:16.928 12:04:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:16.928 12:04:24 -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:16.928 12:04:24 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:12:16.928 12:04:24 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:12:16.928 12:04:24 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:16.928 12:04:24 -- bdev/bdev_raid.sh@279 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:17.186 [2024-07-25 12:04:24.365993] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:17.186 12:04:24 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 
00:12:17.186 12:04:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:12:17.186 12:04:24 -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:12:17.186 12:04:24 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:12:17.445 12:04:24 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:12:17.445 12:04:24 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:12:17.445 12:04:24 -- bdev/bdev_raid.sh@279 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3
00:12:17.445 [2024-07-25 12:04:24.718658] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3
00:12:17.445 12:04:24 -- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:12:17.445 12:04:24 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:12:17.445 12:04:24 -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:12:17.445 12:04:24 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]'
00:12:17.766 12:04:24 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid
00:12:17.767 12:04:24 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:12:17.767 12:04:24 -- bdev/bdev_raid.sh@279 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4
00:12:18.025 [2024-07-25 12:04:25.098877] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4
00:12:18.025 [2024-07-25 12:04:25.098915] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa5e130 name Existed_Raid, state offline
00:12:18.025 12:04:25 -- bdev/bdev_raid.sh@273 -- # (( i++ ))
00:12:18.025 12:04:25 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs ))
00:12:18.025 12:04:25 -- bdev/bdev_raid.sh@281 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:12:18.025 12:04:25 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)'
00:12:18.025 12:04:25 -- bdev/bdev_raid.sh@281 -- # raid_bdev=
00:12:18.025 12:04:25 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']'
00:12:18.025 12:04:25 -- bdev/bdev_raid.sh@287 -- # killprocess 1241509
00:12:18.025 12:04:25 -- common/autotest_common.sh@926 -- # '[' -z 1241509 ']'
00:12:18.025 12:04:25 -- common/autotest_common.sh@930 -- # kill -0 1241509
00:12:18.025 12:04:25 -- common/autotest_common.sh@931 -- # uname
00:12:18.025 12:04:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:12:18.025 12:04:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1241509
00:12:18.025 12:04:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0
00:12:18.025 12:04:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']'
00:12:18.025 12:04:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1241509'
killing process with pid 1241509
00:12:18.025 12:04:25 -- common/autotest_common.sh@945 -- # kill 1241509
00:12:18.025 [2024-07-25 12:04:25.328199] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:12:18.025 12:04:25 -- common/autotest_common.sh@950 -- # wait 1241509
00:12:18.025 [2024-07-25 12:04:25.329004] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:12:18.283 12:04:25 -- bdev/bdev_raid.sh@289 -- # return 0
00:12:18.283
00:12:18.283 real 0m10.663s
00:12:18.283 user 0m18.780s
00:12:18.283 sys 0m2.123s
00:12:18.283 12:04:25 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:18.283 12:04:25 -- common/autotest_common.sh@10 -- # set +x
00:12:18.283 ************************************
00:12:18.283 END TEST raid_state_function_test_sb
00:12:18.283 ************************************
00:12:18.283 12:04:25 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid0 4
00:12:18.283 12:04:25 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']'
00:12:18.283 12:04:25 -- common/autotest_common.sh@1083 -- # xtrace_disable
00:12:18.283 12:04:25 -- common/autotest_common.sh@10 -- # set +x
00:12:18.283 ************************************
00:12:18.283 START TEST raid_superblock_test
00:12:18.283 ************************************
00:12:18.283 12:04:25 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid0 4
00:12:18.283 12:04:25 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid0
00:12:18.283 12:04:25 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4
00:12:18.542 12:04:25 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=()
00:12:18.542 12:04:25 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc
00:12:18.542 12:04:25 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=()
00:12:18.542 12:04:25 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt
00:12:18.542 12:04:25 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=()
00:12:18.542 12:04:25 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid
00:12:18.542 12:04:25 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1
00:12:18.542 12:04:25 -- bdev/bdev_raid.sh@344 -- # local strip_size
00:12:18.542 12:04:25 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg
00:12:18.542 12:04:25 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid
00:12:18.542 12:04:25 -- bdev/bdev_raid.sh@347 -- # local raid_bdev
00:12:18.542 12:04:25 -- bdev/bdev_raid.sh@349 -- # '[' raid0 '!=' raid1 ']'
00:12:18.542 12:04:25 -- bdev/bdev_raid.sh@350 -- # strip_size=64
00:12:18.542 12:04:25 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64'
00:12:18.542 12:04:25 -- bdev/bdev_raid.sh@356 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid
00:12:18.542 12:04:25 -- bdev/bdev_raid.sh@357 -- # raid_pid=1243203
00:12:18.542 12:04:25 -- bdev/bdev_raid.sh@358 -- # waitforlisten 1243203 /var/tmp/spdk-raid.sock
00:12:18.542 12:04:25 -- common/autotest_common.sh@819 -- # '[' -z 1243203 ']'
00:12:18.542 12:04:25 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock
00:12:18.542 12:04:25 -- common/autotest_common.sh@824 -- # local max_retries=100
00:12:18.542 12:04:25 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...
00:12:18.542 12:04:25 -- common/autotest_common.sh@828 -- # xtrace_disable
00:12:18.542 12:04:25 -- common/autotest_common.sh@10 -- # set +x
00:12:18.542 [2024-07-25 12:04:25.635177] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization...
00:12:18.542 [2024-07-25 12:04:25.635226] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1243203 ]
00:12:18.542 [2024-07-25 12:04:25.724143] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:18.542 [2024-07-25 12:04:25.813504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
00:12:18.801 [2024-07-25 12:04:25.870317] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:18.801 [2024-07-25 12:04:25.870343] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:12:19.369 12:04:26 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:12:19.369 12:04:26 -- common/autotest_common.sh@852 -- # return 0
00:12:19.369 12:04:26 -- bdev/bdev_raid.sh@361 -- # (( i = 1 ))
00:12:19.369 12:04:26 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:12:19.369 12:04:26 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1
00:12:19.369 12:04:26 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1
00:12:19.369 12:04:26 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:12:19.369 12:04:26 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:12:19.369 12:04:26 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:12:19.369 12:04:26 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:12:19.369 12:04:26 -- bdev/bdev_raid.sh@370 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
00:12:19.369 malloc1
00:12:19.369 12:04:26 -- bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:12:19.627 [2024-07-25 12:04:26.771379] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:12:19.627 [2024-07-25 12:04:26.771427] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:19.627 [2024-07-25 12:04:26.771443] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x129c8d0
00:12:19.627 [2024-07-25 12:04:26.771451] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:19.627 [2024-07-25 12:04:26.772642] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:19.627 [2024-07-25 12:04:26.772663] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:12:19.627 pt1
00:12:19.627 12:04:26 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:12:19.627 12:04:26 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:12:19.627 12:04:26 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2
00:12:19.627 12:04:26 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2
00:12:19.627 12:04:26 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:12:19.627 12:04:26 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:12:19.627 12:04:26 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:12:19.628 12:04:26 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:12:19.628 12:04:26 -- bdev/bdev_raid.sh@370 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2
00:12:19.886 malloc2
00:12:19.887 12:04:26 -- bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:19.887 [2024-07-25 12:04:27.128281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:19.887 [2024-07-25 12:04:27.128318] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:19.887 [2024-07-25 12:04:27.128332] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14441a0
00:12:19.887 [2024-07-25 12:04:27.128341] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:19.887 [2024-07-25 12:04:27.129488] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:19.887 [2024-07-25 12:04:27.129507] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:19.887 pt2
00:12:19.887 12:04:27 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:12:19.887 12:04:27 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:12:19.887 12:04:27 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3
00:12:19.887 12:04:27 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3
00:12:19.887 12:04:27 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:12:19.887 12:04:27 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:12:19.887 12:04:27 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:12:19.887 12:04:27 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:12:19.887 12:04:27 -- bdev/bdev_raid.sh@370 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3
00:12:20.145 malloc3
00:12:20.145 12:04:27 -- bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:12:20.403 [2024-07-25 12:04:27.462080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:12:20.403 [2024-07-25 12:04:27.462115] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:20.403 [2024-07-25 12:04:27.462131] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1444700
00:12:20.403 [2024-07-25 12:04:27.462140] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:20.403 [2024-07-25 12:04:27.463322] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:20.403 [2024-07-25 12:04:27.463344] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:12:20.403 pt3
00:12:20.403 12:04:27 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:12:20.403 12:04:27 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:12:20.403 12:04:27 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4
00:12:20.403 12:04:27 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4
00:12:20.403 12:04:27 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:12:20.403 12:04:27 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc)
00:12:20.403 12:04:27 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt)
00:12:20.403 12:04:27 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:12:20.403 12:04:27 -- bdev/bdev_raid.sh@370 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4
00:12:20.404 malloc4
00:12:20.404 12:04:27 -- bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:12:20.663 [2024-07-25 12:04:27.799932] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:12:20.663 [2024-07-25 12:04:27.799967] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:20.663 [2024-07-25 12:04:27.799983] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1446e00
00:12:20.663 [2024-07-25 12:04:27.799991] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:20.663 [2024-07-25 12:04:27.801153] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:20.663 [2024-07-25 12:04:27.801174] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:12:20.663 pt4
00:12:20.663 12:04:27 -- bdev/bdev_raid.sh@361 -- # (( i++ ))
00:12:20.663 12:04:27 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs ))
00:12:20.663 12:04:27 -- bdev/bdev_raid.sh@375 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s
00:12:20.663 [2024-07-25 12:04:27.968409] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:12:20.663 [2024-07-25 12:04:27.969390] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:20.663 [2024-07-25 12:04:27.969427] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:12:20.663 [2024-07-25 12:04:27.969454] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:12:20.663 [2024-07-25 12:04:27.969594] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x14429f0
00:12:20.663 [2024-07-25 12:04:27.969602] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
[2024-07-25 12:04:27.969741] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x144a580
00:12:20.663 [2024-07-25 12:04:27.969836] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x14429f0
00:12:20.663 [2024-07-25 12:04:27.969842] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x14429f0
00:12:20.663 [2024-07-25 12:04:27.969909] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:20.921 12:04:27 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:12:20.921 12:04:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:12:20.921 12:04:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:12:20.921 12:04:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:12:20.921 12:04:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:12:20.921 12:04:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:12:20.921 12:04:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:12:20.921 12:04:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:12:20.921 12:04:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:12:20.921 12:04:27 -- bdev/bdev_raid.sh@125 -- # local tmp
00:12:20.921 12:04:27 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:12:20.921 12:04:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:20.921 12:04:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:12:20.921 "name": "raid_bdev1",
00:12:20.921 "uuid": "fc0b79a9-ae38-4b31-b2af-8bd1493c34f3",
00:12:20.921 "strip_size_kb": 64,
00:12:20.921 "state": "online",
00:12:20.921 "raid_level": "raid0",
00:12:20.921 "superblock": true,
00:12:20.921 "num_base_bdevs": 4,
00:12:20.921 "num_base_bdevs_discovered": 4,
00:12:20.921 "num_base_bdevs_operational": 4,
00:12:20.921 "base_bdevs_list": [
00:12:20.921 {
00:12:20.921 "name": "pt1",
00:12:20.921 "uuid": "64f216f9-62b3-5197-98ea-cfd7c0b53ffb",
00:12:20.921 "is_configured": true,
00:12:20.921 "data_offset": 2048,
00:12:20.921 "data_size": 63488
00:12:20.921 },
00:12:20.921 {
00:12:20.921 "name": "pt2",
00:12:20.921 "uuid": "f722913c-ea2c-505b-b03e-73160787d65b",
00:12:20.921 "is_configured": true,
00:12:20.921 "data_offset": 2048,
00:12:20.921 "data_size": 63488
00:12:20.921 },
00:12:20.921 {
00:12:20.921 "name": "pt3",
00:12:20.921 "uuid": "dde9dc64-b953-5734-a66a-f7825304b9af",
00:12:20.921 "is_configured": true,
00:12:20.921 "data_offset": 2048,
00:12:20.921 "data_size": 63488
00:12:20.921 },
00:12:20.921 {
00:12:20.921 "name": "pt4",
00:12:20.921 "uuid": "5541515b-eafb-5ed5-9d5c-153df38bf713",
00:12:20.921 "is_configured": true,
00:12:20.921 "data_offset": 2048,
00:12:20.921 "data_size": 63488
00:12:20.921 }
00:12:20.921 ]
00:12:20.921 }'
00:12:20.921 12:04:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:12:20.921 12:04:28 -- common/autotest_common.sh@10 -- # set +x
00:12:21.491 12:04:28 -- bdev/bdev_raid.sh@379 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:12:21.491 12:04:28 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid'
00:12:21.491 [2024-07-25 12:04:28.774603] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:21.491 12:04:28 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=fc0b79a9-ae38-4b31-b2af-8bd1493c34f3
00:12:21.492 12:04:28 -- bdev/bdev_raid.sh@380 -- # '[' -z fc0b79a9-ae38-4b31-b2af-8bd1493c34f3 ']'
00:12:21.492 12:04:28 -- bdev/bdev_raid.sh@385 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:12:21.749 [2024-07-25 12:04:28.950888] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:21.749 [2024-07-25 12:04:28.950905] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:21.749 [2024-07-25 12:04:28.950939] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:21.749 [2024-07-25 12:04:28.950977] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:21.749 [2024-07-25 12:04:28.950985] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x14429f0 name raid_bdev1, state offline
00:12:21.749 12:04:28 -- bdev/bdev_raid.sh@386 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:12:21.749 12:04:28 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]'
00:12:22.007 12:04:29 -- bdev/bdev_raid.sh@386 -- # raid_bdev=
00:12:22.007 12:04:29 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']'
00:12:22.007 12:04:29 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:12:22.007 12:04:29 -- bdev/bdev_raid.sh@393 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1
00:12:22.007 12:04:29 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:12:22.007 12:04:29 -- bdev/bdev_raid.sh@393 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:12:22.265 12:04:29 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:12:22.265 12:04:29 -- bdev/bdev_raid.sh@393 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3
00:12:22.523 12:04:29 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}"
00:12:22.523 12:04:29 -- bdev/bdev_raid.sh@393 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4
00:12:22.523 12:04:29 -- bdev/bdev_raid.sh@395 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs
00:12:22.523 12:04:29 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:12:22.782 12:04:29 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']'
00:12:22.782 12:04:29 -- bdev/bdev_raid.sh@401 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:12:22.782 12:04:29 -- common/autotest_common.sh@640 -- # local es=0
00:12:22.782 12:04:29 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:12:22.782 12:04:29 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:12:22.782 12:04:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:12:22.782 12:04:29 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:12:22.782 12:04:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:12:22.782 12:04:29 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:12:22.782 12:04:29 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in
00:12:22.782 12:04:29 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py
00:12:22.782 12:04:29 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]]
00:12:22.782 12:04:29 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1
00:12:23.041 [2024-07-25 12:04:30.133922] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:12:23.041 [2024-07-25 12:04:30.135005] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:12:23.041 [2024-07-25 12:04:30.135036] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:12:23.041 [2024-07-25 12:04:30.135058] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:12:23.041 [2024-07-25 12:04:30.135092] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1
00:12:23.041 [2024-07-25 12:04:30.135121] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2
00:12:23.041 [2024-07-25 12:04:30.135136] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3
00:12:23.041 [2024-07-25 12:04:30.135151] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4
00:12:23.041 [2024-07-25 12:04:30.135164] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:23.041 [2024-07-25 12:04:30.135171] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x129d1c0 name raid_bdev1, state configuring
00:12:23.041 request:
00:12:23.041 {
00:12:23.041 "name": "raid_bdev1",
00:12:23.041 "raid_level": "raid0",
00:12:23.041 "base_bdevs": [
00:12:23.041 "malloc1",
00:12:23.041 "malloc2",
00:12:23.041 "malloc3",
00:12:23.041 "malloc4"
00:12:23.041 ],
00:12:23.041 "superblock": false,
00:12:23.041 "strip_size_kb": 64,
00:12:23.041 "method": "bdev_raid_create",
00:12:23.041 "req_id": 1
00:12:23.041 }
00:12:23.041 Got JSON-RPC error response
00:12:23.041 response:
00:12:23.041 {
00:12:23.041 "code": -17,
00:12:23.041 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:12:23.041 }
00:12:23.041 12:04:30 -- common/autotest_common.sh@643 -- # es=1
00:12:23.041 12:04:30 -- common/autotest_common.sh@651 -- # (( es > 128 ))
00:12:23.041 12:04:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]]
00:12:23.041 12:04:30 -- common/autotest_common.sh@667 -- # (( !es == 0 ))
00:12:23.041 12:04:30 -- bdev/bdev_raid.sh@403 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:12:23.041 12:04:30 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]'
00:12:23.041 12:04:30 -- bdev/bdev_raid.sh@403 -- # raid_bdev=
00:12:23.041 12:04:30 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']'
00:12:23.041 12:04:30 -- bdev/bdev_raid.sh@409 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:12:23.299 [2024-07-25 12:04:30.482785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:12:23.299 [2024-07-25 12:04:30.482816] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:23.299 [2024-07-25 12:04:30.482833] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14457c0
00:12:23.299 [2024-07-25 12:04:30.482841] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:23.299 [2024-07-25 12:04:30.484089] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:23.299 [2024-07-25 12:04:30.484112] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:12:23.299 [2024-07-25 12:04:30.484160] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1
00:12:23.299 [2024-07-25 12:04:30.484178] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:12:23.299 pt1
00:12:23.299 12:04:30 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4
00:12:23.299 12:04:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:12:23.299 12:04:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:12:23.299 12:04:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:12:23.299 12:04:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:12:23.299 12:04:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:12:23.299 12:04:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:12:23.299 12:04:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:12:23.299 12:04:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:12:23.299 12:04:30 -- bdev/bdev_raid.sh@125 -- # local tmp
00:12:23.299 12:04:30 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:12:23.299 12:04:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:23.557 12:04:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:12:23.557 "name": "raid_bdev1",
00:12:23.557 "uuid": "fc0b79a9-ae38-4b31-b2af-8bd1493c34f3",
00:12:23.557 "strip_size_kb": 64,
00:12:23.557 "state": "configuring",
00:12:23.557 "raid_level": "raid0",
00:12:23.557 "superblock": true,
00:12:23.557 "num_base_bdevs": 4,
00:12:23.557 "num_base_bdevs_discovered": 1,
00:12:23.557 "num_base_bdevs_operational": 4,
00:12:23.557 "base_bdevs_list": [
00:12:23.557 {
00:12:23.557 "name": "pt1",
00:12:23.557 "uuid": "64f216f9-62b3-5197-98ea-cfd7c0b53ffb",
00:12:23.557 "is_configured": true,
00:12:23.557 "data_offset": 2048,
00:12:23.557 "data_size": 63488
00:12:23.557 },
00:12:23.557 {
00:12:23.557 "name": null,
00:12:23.557 "uuid": "f722913c-ea2c-505b-b03e-73160787d65b",
00:12:23.557 "is_configured": false,
00:12:23.557 "data_offset": 2048,
00:12:23.557 "data_size": 63488
00:12:23.557 },
00:12:23.557 {
00:12:23.557 "name": null,
00:12:23.557 "uuid": "dde9dc64-b953-5734-a66a-f7825304b9af",
00:12:23.557 "is_configured": false,
00:12:23.557 "data_offset": 2048,
00:12:23.557 "data_size": 63488
00:12:23.557 },
00:12:23.557 {
00:12:23.557 "name": null,
00:12:23.557 "uuid": "5541515b-eafb-5ed5-9d5c-153df38bf713",
00:12:23.557 "is_configured": false,
00:12:23.557 "data_offset": 2048,
00:12:23.557 "data_size": 63488
00:12:23.557 }
00:12:23.557 ]
00:12:23.557 }'
00:12:23.557 12:04:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:12:23.557 12:04:30 -- common/autotest_common.sh@10 -- # set +x
00:12:24.123 12:04:31 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']'
00:12:24.123 12:04:31 -- bdev/bdev_raid.sh@416 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:24.123 [2024-07-25 12:04:31.288870] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:24.123 [2024-07-25 12:04:31.288911] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:24.123 [2024-07-25 12:04:31.288926] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1448be0
00:12:24.123 [2024-07-25 12:04:31.288935] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:24.123 [2024-07-25 12:04:31.289182] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:24.123 [2024-07-25 12:04:31.289193] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:24.123 [2024-07-25 12:04:31.289239] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:12:24.123 [2024-07-25 12:04:31.289251] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:24.123 pt2
00:12:24.123 12:04:31 -- bdev/bdev_raid.sh@417 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2
00:12:24.381 [2024-07-25 12:04:31.457334] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:12:24.381 12:04:31 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4
00:12:24.381 12:04:31 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:12:24.381 12:04:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring
00:12:24.381 12:04:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:12:24.381 12:04:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:12:24.381 12:04:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:12:24.381 12:04:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:12:24.381 12:04:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:12:24.381 12:04:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:12:24.381 12:04:31 -- bdev/bdev_raid.sh@125 -- # local tmp
00:12:24.381 12:04:31 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:12:24.381 12:04:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:24.381 12:04:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:12:24.381 "name": "raid_bdev1",
00:12:24.381 "uuid": "fc0b79a9-ae38-4b31-b2af-8bd1493c34f3",
00:12:24.381 "strip_size_kb": 64,
00:12:24.381 "state": "configuring",
00:12:24.381 "raid_level": "raid0",
00:12:24.381 "superblock": true,
00:12:24.381 "num_base_bdevs": 4,
00:12:24.381 "num_base_bdevs_discovered": 1,
00:12:24.381 "num_base_bdevs_operational": 4,
00:12:24.381 "base_bdevs_list": [
00:12:24.381 {
00:12:24.381 "name": "pt1",
00:12:24.381 "uuid": "64f216f9-62b3-5197-98ea-cfd7c0b53ffb",
00:12:24.381 "is_configured": true,
00:12:24.381 "data_offset": 2048,
00:12:24.381 "data_size": 63488
00:12:24.381 },
00:12:24.381 {
00:12:24.381 "name": null,
00:12:24.381 "uuid": "f722913c-ea2c-505b-b03e-73160787d65b",
00:12:24.381 "is_configured": false,
00:12:24.381 "data_offset": 2048,
00:12:24.381 "data_size": 63488
00:12:24.381 },
00:12:24.381 {
00:12:24.381 "name": null,
00:12:24.381 "uuid": "dde9dc64-b953-5734-a66a-f7825304b9af",
00:12:24.381 "is_configured": false,
00:12:24.381 "data_offset": 2048,
00:12:24.381 "data_size": 63488
00:12:24.381 },
00:12:24.381 {
00:12:24.381 "name": null,
00:12:24.381 "uuid": "5541515b-eafb-5ed5-9d5c-153df38bf713",
00:12:24.381 "is_configured": false,
00:12:24.381 "data_offset": 2048,
00:12:24.381 "data_size": 63488
00:12:24.381 }
00:12:24.381 ]
00:12:24.381 }'
00:12:24.381 12:04:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:12:24.381 12:04:31 -- common/autotest_common.sh@10 -- # set +x
00:12:24.949 12:04:32 -- bdev/bdev_raid.sh@422 -- # (( i = 1 ))
00:12:24.949 12:04:32 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:12:24.949 12:04:32 -- bdev/bdev_raid.sh@423 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:24.949 [2024-07-25 12:04:32.251361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:24.949 [2024-07-25 12:04:32.251397] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:24.949 [2024-07-25 12:04:32.251412] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1448e50
00:12:24.949 [2024-07-25 12:04:32.251421] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:24.949 [2024-07-25 12:04:32.251668] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:24.949 [2024-07-25 12:04:32.251680] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:24.949 [2024-07-25 12:04:32.251725] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2
00:12:24.949 [2024-07-25 12:04:32.251737] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:24.949 pt2
00:12:25.207 12:04:32 -- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:12:25.207 12:04:32 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:12:25.207 12:04:32 -- bdev/bdev_raid.sh@423 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:12:25.207 [2024-07-25 12:04:32.423801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:12:25.207 [2024-07-25 12:04:32.423821] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:25.207 [2024-07-25 12:04:32.423831] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x14481b0
00:12:25.207 [2024-07-25 12:04:32.423839] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:25.208 [2024-07-25 12:04:32.424019] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:25.208 [2024-07-25 12:04:32.424029] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:12:25.208 [2024-07-25 12:04:32.424061] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3
00:12:25.208 [2024-07-25 12:04:32.424071] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:12:25.208 pt3
00:12:25.208 12:04:32 -- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:12:25.208 12:04:32 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:12:25.208 12:04:32 -- bdev/bdev_raid.sh@423 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:12:25.466 [2024-07-25 12:04:32.584215] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:12:25.466 [2024-07-25 12:04:32.584236] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:25.466 [2024-07-25 12:04:32.584247] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1443340
[2024-07-25 12:04:32.584254] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:25.466 [2024-07-25 12:04:32.584450] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:25.466 [2024-07-25 12:04:32.584461] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:12:25.466 [2024-07-25 12:04:32.584510] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4
00:12:25.466 [2024-07-25 12:04:32.584522] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:12:25.466 [2024-07-25 12:04:32.584597] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x1449320
00:12:25.466 [2024-07-25 12:04:32.584604] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512
00:12:25.466 [2024-07-25 12:04:32.584713] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x143aa10
00:12:25.466 [2024-07-25 12:04:32.584796] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1449320
00:12:25.466 [2024-07-25 12:04:32.584802] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1449320
00:12:25.466 [2024-07-25 12:04:32.584863] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:25.467 pt4
00:12:25.467 12:04:32 -- bdev/bdev_raid.sh@422 -- # (( i++ ))
00:12:25.467 12:04:32 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs ))
00:12:25.467 12:04:32 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4
00:12:25.467 12:04:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:12:25.467 12:04:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:12:25.467 12:04:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid0
00:12:25.467 12:04:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=64
00:12:25.467 12:04:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:12:25.467 12:04:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:12:25.467 12:04:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:12:25.467 12:04:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:12:25.467 12:04:32 -- bdev/bdev_raid.sh@125 -- # local tmp
00:12:25.467 12:04:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:25.467 12:04:32 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:12:25.725 12:04:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:12:25.725 "name": "raid_bdev1",
00:12:25.725 "uuid": "fc0b79a9-ae38-4b31-b2af-8bd1493c34f3",
00:12:25.725 "strip_size_kb": 64,
00:12:25.725 "state": "online",
00:12:25.725 "raid_level": "raid0",
00:12:25.725 "superblock": true,
00:12:25.725 "num_base_bdevs": 4,
00:12:25.725 "num_base_bdevs_discovered": 4,
00:12:25.725 "num_base_bdevs_operational": 4,
00:12:25.725 "base_bdevs_list": [
00:12:25.725 {
00:12:25.725 "name": "pt1",
00:12:25.725 "uuid": "64f216f9-62b3-5197-98ea-cfd7c0b53ffb",
00:12:25.725 "is_configured": true,
00:12:25.725 "data_offset": 2048,
00:12:25.725 "data_size": 63488
00:12:25.725 },
00:12:25.725 {
00:12:25.725 "name": "pt2",
00:12:25.725 "uuid": "f722913c-ea2c-505b-b03e-73160787d65b",
00:12:25.725 "is_configured": true,
00:12:25.725 "data_offset": 2048,
00:12:25.725 "data_size": 63488
00:12:25.725 },
00:12:25.725 {
00:12:25.725 "name": "pt3",
00:12:25.725 "uuid": "dde9dc64-b953-5734-a66a-f7825304b9af",
00:12:25.725 "is_configured": true,
00:12:25.725 "data_offset": 2048,
00:12:25.725 "data_size": 63488
00:12:25.725 },
00:12:25.725 {
00:12:25.725 "name": "pt4",
00:12:25.725 "uuid": "5541515b-eafb-5ed5-9d5c-153df38bf713",
00:12:25.725 "is_configured": true,
00:12:25.725 "data_offset": 2048,
00:12:25.725 "data_size": 63488
00:12:25.725 }
00:12:25.725 ]
00:12:25.725 }'
00:12:25.725 12:04:32 --
bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:25.725 12:04:32 -- common/autotest_common.sh@10 -- # set +x 00:12:25.984 12:04:33 -- bdev/bdev_raid.sh@430 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:25.984 12:04:33 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:12:26.245 [2024-07-25 12:04:33.434576] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:26.245 12:04:33 -- bdev/bdev_raid.sh@430 -- # '[' fc0b79a9-ae38-4b31-b2af-8bd1493c34f3 '!=' fc0b79a9-ae38-4b31-b2af-8bd1493c34f3 ']' 00:12:26.245 12:04:33 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid0 00:12:26.245 12:04:33 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:12:26.245 12:04:33 -- bdev/bdev_raid.sh@197 -- # return 1 00:12:26.245 12:04:33 -- bdev/bdev_raid.sh@511 -- # killprocess 1243203 00:12:26.245 12:04:33 -- common/autotest_common.sh@926 -- # '[' -z 1243203 ']' 00:12:26.245 12:04:33 -- common/autotest_common.sh@930 -- # kill -0 1243203 00:12:26.245 12:04:33 -- common/autotest_common.sh@931 -- # uname 00:12:26.245 12:04:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:26.245 12:04:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1243203 00:12:26.245 12:04:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:26.245 12:04:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:26.245 12:04:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1243203' 00:12:26.245 killing process with pid 1243203 00:12:26.245 12:04:33 -- common/autotest_common.sh@945 -- # kill 1243203 00:12:26.245 [2024-07-25 12:04:33.502550] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:26.245 [2024-07-25 12:04:33.502601] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:26.245 12:04:33 -- common/autotest_common.sh@950 -- # wait 1243203 00:12:26.245 [2024-07-25 12:04:33.502642] 
bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:26.245 [2024-07-25 12:04:33.502650] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1449320 name raid_bdev1, state offline 00:12:26.245 [2024-07-25 12:04:33.538063] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:26.506 12:04:33 -- bdev/bdev_raid.sh@513 -- # return 0 00:12:26.506 00:12:26.506 real 0m8.150s 00:12:26.506 user 0m14.068s 00:12:26.506 sys 0m1.659s 00:12:26.506 12:04:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:26.506 12:04:33 -- common/autotest_common.sh@10 -- # set +x 00:12:26.506 ************************************ 00:12:26.506 END TEST raid_superblock_test 00:12:26.506 ************************************ 00:12:26.506 12:04:33 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:12:26.506 12:04:33 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:12:26.506 12:04:33 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:12:26.506 12:04:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:26.506 12:04:33 -- common/autotest_common.sh@10 -- # set +x 00:12:26.506 ************************************ 00:12:26.506 START TEST raid_state_function_test 00:12:26.506 ************************************ 00:12:26.506 12:04:33 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 false 00:12:26.506 12:04:33 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:12:26.506 12:04:33 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:12:26.506 12:04:33 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:12:26.506 12:04:33 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:12:26.506 12:04:33 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:12:26.506 12:04:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:26.506 12:04:33 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:12:26.506 
12:04:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:26.506 12:04:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:26.506 12:04:33 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:12:26.506 12:04:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:26.506 12:04:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:26.506 12:04:33 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:12:26.506 12:04:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:26.506 12:04:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:26.506 12:04:33 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:12:26.506 12:04:33 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:26.506 12:04:33 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:26.506 12:04:33 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:26.506 12:04:33 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:12:26.506 12:04:33 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:12:26.506 12:04:33 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:12:26.506 12:04:33 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:12:26.506 12:04:33 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:12:26.506 12:04:33 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:12:26.506 12:04:33 -- bdev/bdev_raid.sh@213 -- # strip_size=64 00:12:26.506 12:04:33 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:12:26.506 12:04:33 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:12:26.506 12:04:33 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:12:26.506 12:04:33 -- bdev/bdev_raid.sh@226 -- # raid_pid=1244459 00:12:26.506 12:04:33 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 1244459' 00:12:26.506 Process raid pid: 1244459 00:12:26.506 12:04:33 -- bdev/bdev_raid.sh@225 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:26.506 
12:04:33 -- bdev/bdev_raid.sh@228 -- # waitforlisten 1244459 /var/tmp/spdk-raid.sock 00:12:26.506 12:04:33 -- common/autotest_common.sh@819 -- # '[' -z 1244459 ']' 00:12:26.506 12:04:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:26.506 12:04:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:26.506 12:04:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:26.506 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:26.506 12:04:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:26.506 12:04:33 -- common/autotest_common.sh@10 -- # set +x 00:12:26.765 [2024-07-25 12:04:33.825351] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:12:26.765 [2024-07-25 12:04:33.825399] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.765 [2024-07-25 12:04:33.908682] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.765 [2024-07-25 12:04:33.995304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.765 [2024-07-25 12:04:34.055947] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:26.765 [2024-07-25 12:04:34.055965] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:27.330 12:04:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:27.330 12:04:34 -- common/autotest_common.sh@852 -- # return 0 00:12:27.330 12:04:34 -- bdev/bdev_raid.sh@232 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:27.587 [2024-07-25 12:04:34.756286] 
bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:27.587 [2024-07-25 12:04:34.756317] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:27.587 [2024-07-25 12:04:34.756323] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:27.587 [2024-07-25 12:04:34.756331] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:27.587 [2024-07-25 12:04:34.756336] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:27.587 [2024-07-25 12:04:34.756343] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:27.587 [2024-07-25 12:04:34.756348] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:27.587 [2024-07-25 12:04:34.756355] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:27.587 12:04:34 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:27.587 12:04:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:27.587 12:04:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:27.587 12:04:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:27.587 12:04:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:27.587 12:04:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:12:27.587 12:04:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:27.587 12:04:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:27.587 12:04:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:27.587 12:04:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:27.587 12:04:34 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:27.587 
12:04:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.845 12:04:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:27.845 "name": "Existed_Raid", 00:12:27.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.845 "strip_size_kb": 64, 00:12:27.845 "state": "configuring", 00:12:27.845 "raid_level": "concat", 00:12:27.845 "superblock": false, 00:12:27.845 "num_base_bdevs": 4, 00:12:27.845 "num_base_bdevs_discovered": 0, 00:12:27.845 "num_base_bdevs_operational": 4, 00:12:27.845 "base_bdevs_list": [ 00:12:27.845 { 00:12:27.845 "name": "BaseBdev1", 00:12:27.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.845 "is_configured": false, 00:12:27.845 "data_offset": 0, 00:12:27.845 "data_size": 0 00:12:27.845 }, 00:12:27.845 { 00:12:27.845 "name": "BaseBdev2", 00:12:27.845 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.845 "is_configured": false, 00:12:27.845 "data_offset": 0, 00:12:27.845 "data_size": 0 00:12:27.845 }, 00:12:27.845 { 00:12:27.845 "name": "BaseBdev3", 00:12:27.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.846 "is_configured": false, 00:12:27.846 "data_offset": 0, 00:12:27.846 "data_size": 0 00:12:27.846 }, 00:12:27.846 { 00:12:27.846 "name": "BaseBdev4", 00:12:27.846 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.846 "is_configured": false, 00:12:27.846 "data_offset": 0, 00:12:27.846 "data_size": 0 00:12:27.846 } 00:12:27.846 ] 00:12:27.846 }' 00:12:27.846 12:04:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:27.846 12:04:34 -- common/autotest_common.sh@10 -- # set +x 00:12:28.411 12:04:35 -- bdev/bdev_raid.sh@234 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:28.411 [2024-07-25 12:04:35.582313] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:28.411 [2024-07-25 12:04:35.582333] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x18ccd80 name Existed_Raid, state configuring 00:12:28.411 12:04:35 -- bdev/bdev_raid.sh@238 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:28.668 [2024-07-25 12:04:35.750767] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:28.668 [2024-07-25 12:04:35.750786] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:28.668 [2024-07-25 12:04:35.750793] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:28.668 [2024-07-25 12:04:35.750800] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:28.668 [2024-07-25 12:04:35.750805] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:28.668 [2024-07-25 12:04:35.750813] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:28.668 [2024-07-25 12:04:35.750818] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:28.668 [2024-07-25 12:04:35.750826] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:28.668 12:04:35 -- bdev/bdev_raid.sh@239 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:28.668 [2024-07-25 12:04:35.935842] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:28.668 BaseBdev1 00:12:28.668 12:04:35 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:12:28.668 12:04:35 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:12:28.668 12:04:35 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:28.668 12:04:35 -- common/autotest_common.sh@889 -- # local i 00:12:28.668 12:04:35 -- 
common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:28.668 12:04:35 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:28.668 12:04:35 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:28.926 12:04:36 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:29.184 [ 00:12:29.184 { 00:12:29.184 "name": "BaseBdev1", 00:12:29.184 "aliases": [ 00:12:29.184 "2f5db92d-efcc-4909-b9f0-9400ef138737" 00:12:29.184 ], 00:12:29.184 "product_name": "Malloc disk", 00:12:29.184 "block_size": 512, 00:12:29.184 "num_blocks": 65536, 00:12:29.184 "uuid": "2f5db92d-efcc-4909-b9f0-9400ef138737", 00:12:29.184 "assigned_rate_limits": { 00:12:29.184 "rw_ios_per_sec": 0, 00:12:29.184 "rw_mbytes_per_sec": 0, 00:12:29.184 "r_mbytes_per_sec": 0, 00:12:29.184 "w_mbytes_per_sec": 0 00:12:29.184 }, 00:12:29.184 "claimed": true, 00:12:29.184 "claim_type": "exclusive_write", 00:12:29.184 "zoned": false, 00:12:29.184 "supported_io_types": { 00:12:29.184 "read": true, 00:12:29.184 "write": true, 00:12:29.184 "unmap": true, 00:12:29.184 "write_zeroes": true, 00:12:29.184 "flush": true, 00:12:29.184 "reset": true, 00:12:29.184 "compare": false, 00:12:29.184 "compare_and_write": false, 00:12:29.184 "abort": true, 00:12:29.184 "nvme_admin": false, 00:12:29.184 "nvme_io": false 00:12:29.184 }, 00:12:29.184 "memory_domains": [ 00:12:29.184 { 00:12:29.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.184 "dma_device_type": 2 00:12:29.184 } 00:12:29.184 ], 00:12:29.184 "driver_specific": {} 00:12:29.184 } 00:12:29.184 ] 00:12:29.184 12:04:36 -- common/autotest_common.sh@895 -- # return 0 00:12:29.184 12:04:36 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:29.184 12:04:36 -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:12:29.184 12:04:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:29.184 12:04:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:29.184 12:04:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:29.184 12:04:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:12:29.184 12:04:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:29.184 12:04:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:29.184 12:04:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:29.184 12:04:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:29.184 12:04:36 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:29.184 12:04:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.184 12:04:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:29.184 "name": "Existed_Raid", 00:12:29.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.184 "strip_size_kb": 64, 00:12:29.184 "state": "configuring", 00:12:29.184 "raid_level": "concat", 00:12:29.184 "superblock": false, 00:12:29.184 "num_base_bdevs": 4, 00:12:29.184 "num_base_bdevs_discovered": 1, 00:12:29.184 "num_base_bdevs_operational": 4, 00:12:29.184 "base_bdevs_list": [ 00:12:29.184 { 00:12:29.184 "name": "BaseBdev1", 00:12:29.184 "uuid": "2f5db92d-efcc-4909-b9f0-9400ef138737", 00:12:29.184 "is_configured": true, 00:12:29.184 "data_offset": 0, 00:12:29.184 "data_size": 65536 00:12:29.184 }, 00:12:29.184 { 00:12:29.184 "name": "BaseBdev2", 00:12:29.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.184 "is_configured": false, 00:12:29.184 "data_offset": 0, 00:12:29.184 "data_size": 0 00:12:29.184 }, 00:12:29.184 { 00:12:29.184 "name": "BaseBdev3", 00:12:29.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.184 "is_configured": false, 00:12:29.184 "data_offset": 0, 
00:12:29.184 "data_size": 0 00:12:29.184 }, 00:12:29.184 { 00:12:29.184 "name": "BaseBdev4", 00:12:29.184 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.184 "is_configured": false, 00:12:29.184 "data_offset": 0, 00:12:29.184 "data_size": 0 00:12:29.184 } 00:12:29.184 ] 00:12:29.184 }' 00:12:29.184 12:04:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:29.184 12:04:36 -- common/autotest_common.sh@10 -- # set +x 00:12:29.751 12:04:36 -- bdev/bdev_raid.sh@242 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:30.010 [2024-07-25 12:04:37.086812] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:30.010 [2024-07-25 12:04:37.086845] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18cd000 name Existed_Raid, state configuring 00:12:30.010 12:04:37 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:12:30.010 12:04:37 -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:30.010 [2024-07-25 12:04:37.247251] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:30.010 [2024-07-25 12:04:37.248328] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:30.010 [2024-07-25 12:04:37.248355] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:30.010 [2024-07-25 12:04:37.248361] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:30.010 [2024-07-25 12:04:37.248369] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:30.010 [2024-07-25 12:04:37.248374] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:30.010 [2024-07-25 
12:04:37.248385] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:30.010 12:04:37 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:12:30.010 12:04:37 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:30.010 12:04:37 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:30.010 12:04:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:30.010 12:04:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:30.010 12:04:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:30.010 12:04:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:30.010 12:04:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:12:30.010 12:04:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:30.010 12:04:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:30.010 12:04:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:30.010 12:04:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:30.010 12:04:37 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:30.010 12:04:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:30.268 12:04:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:30.268 "name": "Existed_Raid", 00:12:30.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.268 "strip_size_kb": 64, 00:12:30.268 "state": "configuring", 00:12:30.268 "raid_level": "concat", 00:12:30.268 "superblock": false, 00:12:30.268 "num_base_bdevs": 4, 00:12:30.268 "num_base_bdevs_discovered": 1, 00:12:30.268 "num_base_bdevs_operational": 4, 00:12:30.268 "base_bdevs_list": [ 00:12:30.268 { 00:12:30.268 "name": "BaseBdev1", 00:12:30.268 "uuid": "2f5db92d-efcc-4909-b9f0-9400ef138737", 00:12:30.268 "is_configured": true, 00:12:30.268 "data_offset": 0, 00:12:30.268 "data_size": 65536 
00:12:30.268 }, 00:12:30.268 { 00:12:30.268 "name": "BaseBdev2", 00:12:30.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.268 "is_configured": false, 00:12:30.268 "data_offset": 0, 00:12:30.268 "data_size": 0 00:12:30.268 }, 00:12:30.268 { 00:12:30.268 "name": "BaseBdev3", 00:12:30.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.268 "is_configured": false, 00:12:30.268 "data_offset": 0, 00:12:30.268 "data_size": 0 00:12:30.268 }, 00:12:30.268 { 00:12:30.268 "name": "BaseBdev4", 00:12:30.268 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.268 "is_configured": false, 00:12:30.268 "data_offset": 0, 00:12:30.268 "data_size": 0 00:12:30.268 } 00:12:30.268 ] 00:12:30.268 }' 00:12:30.268 12:04:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:30.268 12:04:37 -- common/autotest_common.sh@10 -- # set +x 00:12:30.835 12:04:37 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:30.835 [2024-07-25 12:04:38.056318] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:30.835 BaseBdev2 00:12:30.835 12:04:38 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:12:30.835 12:04:38 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:12:30.835 12:04:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:30.835 12:04:38 -- common/autotest_common.sh@889 -- # local i 00:12:30.835 12:04:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:30.835 12:04:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:30.835 12:04:38 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:31.093 12:04:38 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 
00:12:31.351 [ 00:12:31.351 { 00:12:31.351 "name": "BaseBdev2", 00:12:31.351 "aliases": [ 00:12:31.351 "5454e4a6-be7e-4f4b-9ff6-1e61b3398476" 00:12:31.351 ], 00:12:31.351 "product_name": "Malloc disk", 00:12:31.351 "block_size": 512, 00:12:31.351 "num_blocks": 65536, 00:12:31.351 "uuid": "5454e4a6-be7e-4f4b-9ff6-1e61b3398476", 00:12:31.351 "assigned_rate_limits": { 00:12:31.351 "rw_ios_per_sec": 0, 00:12:31.351 "rw_mbytes_per_sec": 0, 00:12:31.351 "r_mbytes_per_sec": 0, 00:12:31.351 "w_mbytes_per_sec": 0 00:12:31.351 }, 00:12:31.351 "claimed": true, 00:12:31.351 "claim_type": "exclusive_write", 00:12:31.351 "zoned": false, 00:12:31.351 "supported_io_types": { 00:12:31.351 "read": true, 00:12:31.351 "write": true, 00:12:31.351 "unmap": true, 00:12:31.351 "write_zeroes": true, 00:12:31.351 "flush": true, 00:12:31.351 "reset": true, 00:12:31.351 "compare": false, 00:12:31.351 "compare_and_write": false, 00:12:31.351 "abort": true, 00:12:31.351 "nvme_admin": false, 00:12:31.351 "nvme_io": false 00:12:31.351 }, 00:12:31.351 "memory_domains": [ 00:12:31.351 { 00:12:31.351 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.351 "dma_device_type": 2 00:12:31.351 } 00:12:31.351 ], 00:12:31.351 "driver_specific": {} 00:12:31.351 } 00:12:31.351 ] 00:12:31.351 12:04:38 -- common/autotest_common.sh@895 -- # return 0 00:12:31.351 12:04:38 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:12:31.351 12:04:38 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:31.351 12:04:38 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:31.351 12:04:38 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:31.351 12:04:38 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:31.351 12:04:38 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:31.351 12:04:38 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:31.351 12:04:38 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:12:31.351 
12:04:38 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:31.351 12:04:38 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:31.351 12:04:38 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:31.351 12:04:38 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:31.351 12:04:38 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:31.351 12:04:38 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.351 12:04:38 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:31.351 "name": "Existed_Raid", 00:12:31.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.351 "strip_size_kb": 64, 00:12:31.351 "state": "configuring", 00:12:31.351 "raid_level": "concat", 00:12:31.351 "superblock": false, 00:12:31.351 "num_base_bdevs": 4, 00:12:31.351 "num_base_bdevs_discovered": 2, 00:12:31.351 "num_base_bdevs_operational": 4, 00:12:31.351 "base_bdevs_list": [ 00:12:31.351 { 00:12:31.351 "name": "BaseBdev1", 00:12:31.351 "uuid": "2f5db92d-efcc-4909-b9f0-9400ef138737", 00:12:31.351 "is_configured": true, 00:12:31.351 "data_offset": 0, 00:12:31.351 "data_size": 65536 00:12:31.351 }, 00:12:31.351 { 00:12:31.351 "name": "BaseBdev2", 00:12:31.351 "uuid": "5454e4a6-be7e-4f4b-9ff6-1e61b3398476", 00:12:31.351 "is_configured": true, 00:12:31.351 "data_offset": 0, 00:12:31.351 "data_size": 65536 00:12:31.351 }, 00:12:31.351 { 00:12:31.351 "name": "BaseBdev3", 00:12:31.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.351 "is_configured": false, 00:12:31.351 "data_offset": 0, 00:12:31.351 "data_size": 0 00:12:31.351 }, 00:12:31.351 { 00:12:31.351 "name": "BaseBdev4", 00:12:31.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:31.351 "is_configured": false, 00:12:31.351 "data_offset": 0, 00:12:31.351 "data_size": 0 00:12:31.351 } 00:12:31.351 ] 00:12:31.351 }' 00:12:31.351 12:04:38 -- bdev/bdev_raid.sh@129 -- # 
xtrace_disable 00:12:31.351 12:04:38 -- common/autotest_common.sh@10 -- # set +x 00:12:31.924 12:04:39 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:31.924 [2024-07-25 12:04:39.226371] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:31.924 BaseBdev3 00:12:32.199 12:04:39 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:12:32.199 12:04:39 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:12:32.199 12:04:39 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:32.199 12:04:39 -- common/autotest_common.sh@889 -- # local i 00:12:32.199 12:04:39 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:32.199 12:04:39 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:32.199 12:04:39 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:32.199 12:04:39 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:32.464 [ 00:12:32.464 { 00:12:32.464 "name": "BaseBdev3", 00:12:32.464 "aliases": [ 00:12:32.464 "4f64744e-e616-4fb2-b4f5-c6e31b561bce" 00:12:32.464 ], 00:12:32.464 "product_name": "Malloc disk", 00:12:32.464 "block_size": 512, 00:12:32.464 "num_blocks": 65536, 00:12:32.465 "uuid": "4f64744e-e616-4fb2-b4f5-c6e31b561bce", 00:12:32.465 "assigned_rate_limits": { 00:12:32.465 "rw_ios_per_sec": 0, 00:12:32.465 "rw_mbytes_per_sec": 0, 00:12:32.465 "r_mbytes_per_sec": 0, 00:12:32.465 "w_mbytes_per_sec": 0 00:12:32.465 }, 00:12:32.465 "claimed": true, 00:12:32.465 "claim_type": "exclusive_write", 00:12:32.465 "zoned": false, 00:12:32.465 "supported_io_types": { 00:12:32.465 "read": true, 00:12:32.465 "write": true, 00:12:32.465 "unmap": true, 00:12:32.465 
"write_zeroes": true, 00:12:32.465 "flush": true, 00:12:32.465 "reset": true, 00:12:32.465 "compare": false, 00:12:32.465 "compare_and_write": false, 00:12:32.465 "abort": true, 00:12:32.465 "nvme_admin": false, 00:12:32.465 "nvme_io": false 00:12:32.465 }, 00:12:32.465 "memory_domains": [ 00:12:32.465 { 00:12:32.465 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:32.465 "dma_device_type": 2 00:12:32.465 } 00:12:32.465 ], 00:12:32.465 "driver_specific": {} 00:12:32.465 } 00:12:32.465 ] 00:12:32.465 12:04:39 -- common/autotest_common.sh@895 -- # return 0 00:12:32.465 12:04:39 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:12:32.465 12:04:39 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:32.465 12:04:39 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:32.465 12:04:39 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:32.465 12:04:39 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:32.465 12:04:39 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:32.465 12:04:39 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:32.465 12:04:39 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:12:32.465 12:04:39 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:32.465 12:04:39 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:32.465 12:04:39 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:32.465 12:04:39 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:32.465 12:04:39 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:32.465 12:04:39 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:32.465 12:04:39 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:32.465 "name": "Existed_Raid", 00:12:32.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.465 "strip_size_kb": 64, 00:12:32.465 "state": 
"configuring", 00:12:32.465 "raid_level": "concat", 00:12:32.465 "superblock": false, 00:12:32.465 "num_base_bdevs": 4, 00:12:32.465 "num_base_bdevs_discovered": 3, 00:12:32.465 "num_base_bdevs_operational": 4, 00:12:32.465 "base_bdevs_list": [ 00:12:32.465 { 00:12:32.465 "name": "BaseBdev1", 00:12:32.465 "uuid": "2f5db92d-efcc-4909-b9f0-9400ef138737", 00:12:32.465 "is_configured": true, 00:12:32.465 "data_offset": 0, 00:12:32.465 "data_size": 65536 00:12:32.465 }, 00:12:32.465 { 00:12:32.465 "name": "BaseBdev2", 00:12:32.465 "uuid": "5454e4a6-be7e-4f4b-9ff6-1e61b3398476", 00:12:32.465 "is_configured": true, 00:12:32.465 "data_offset": 0, 00:12:32.465 "data_size": 65536 00:12:32.465 }, 00:12:32.465 { 00:12:32.465 "name": "BaseBdev3", 00:12:32.465 "uuid": "4f64744e-e616-4fb2-b4f5-c6e31b561bce", 00:12:32.465 "is_configured": true, 00:12:32.465 "data_offset": 0, 00:12:32.465 "data_size": 65536 00:12:32.465 }, 00:12:32.465 { 00:12:32.465 "name": "BaseBdev4", 00:12:32.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.465 "is_configured": false, 00:12:32.465 "data_offset": 0, 00:12:32.465 "data_size": 0 00:12:32.465 } 00:12:32.465 ] 00:12:32.465 }' 00:12:32.465 12:04:39 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:32.465 12:04:39 -- common/autotest_common.sh@10 -- # set +x 00:12:33.032 12:04:40 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:12:33.291 [2024-07-25 12:04:40.408451] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:33.291 [2024-07-25 12:04:40.408482] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x18cc5f0 00:12:33.291 [2024-07-25 12:04:40.408488] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:33.291 [2024-07-25 12:04:40.408673] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x18d0e40 00:12:33.291 
[2024-07-25 12:04:40.408758] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x18cc5f0 00:12:33.291 [2024-07-25 12:04:40.408764] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x18cc5f0 00:12:33.291 [2024-07-25 12:04:40.408909] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.291 BaseBdev4 00:12:33.291 12:04:40 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:12:33.291 12:04:40 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:12:33.291 12:04:40 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:33.291 12:04:40 -- common/autotest_common.sh@889 -- # local i 00:12:33.291 12:04:40 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:33.291 12:04:40 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:33.291 12:04:40 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:33.291 12:04:40 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:33.550 [ 00:12:33.550 { 00:12:33.550 "name": "BaseBdev4", 00:12:33.550 "aliases": [ 00:12:33.550 "fdad9825-25fe-453a-98a6-24e83549eb2d" 00:12:33.550 ], 00:12:33.550 "product_name": "Malloc disk", 00:12:33.550 "block_size": 512, 00:12:33.550 "num_blocks": 65536, 00:12:33.550 "uuid": "fdad9825-25fe-453a-98a6-24e83549eb2d", 00:12:33.550 "assigned_rate_limits": { 00:12:33.550 "rw_ios_per_sec": 0, 00:12:33.550 "rw_mbytes_per_sec": 0, 00:12:33.550 "r_mbytes_per_sec": 0, 00:12:33.550 "w_mbytes_per_sec": 0 00:12:33.550 }, 00:12:33.550 "claimed": true, 00:12:33.550 "claim_type": "exclusive_write", 00:12:33.550 "zoned": false, 00:12:33.550 "supported_io_types": { 00:12:33.550 "read": true, 00:12:33.550 "write": true, 00:12:33.550 "unmap": true, 00:12:33.550 "write_zeroes": 
true, 00:12:33.550 "flush": true, 00:12:33.550 "reset": true, 00:12:33.550 "compare": false, 00:12:33.550 "compare_and_write": false, 00:12:33.550 "abort": true, 00:12:33.550 "nvme_admin": false, 00:12:33.550 "nvme_io": false 00:12:33.550 }, 00:12:33.550 "memory_domains": [ 00:12:33.550 { 00:12:33.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:33.550 "dma_device_type": 2 00:12:33.550 } 00:12:33.550 ], 00:12:33.550 "driver_specific": {} 00:12:33.550 } 00:12:33.550 ] 00:12:33.550 12:04:40 -- common/autotest_common.sh@895 -- # return 0 00:12:33.550 12:04:40 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:12:33.550 12:04:40 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:33.550 12:04:40 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:33.550 12:04:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:33.550 12:04:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:33.550 12:04:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:33.550 12:04:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:33.550 12:04:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:12:33.550 12:04:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:33.550 12:04:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:33.550 12:04:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:33.550 12:04:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:33.550 12:04:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:33.550 12:04:40 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:33.810 12:04:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:33.810 "name": "Existed_Raid", 00:12:33.810 "uuid": "592645ce-15eb-407a-a623-e7de1254d2e1", 00:12:33.810 "strip_size_kb": 64, 00:12:33.810 "state": "online", 00:12:33.810 
"raid_level": "concat", 00:12:33.810 "superblock": false, 00:12:33.810 "num_base_bdevs": 4, 00:12:33.810 "num_base_bdevs_discovered": 4, 00:12:33.810 "num_base_bdevs_operational": 4, 00:12:33.810 "base_bdevs_list": [ 00:12:33.810 { 00:12:33.810 "name": "BaseBdev1", 00:12:33.810 "uuid": "2f5db92d-efcc-4909-b9f0-9400ef138737", 00:12:33.810 "is_configured": true, 00:12:33.810 "data_offset": 0, 00:12:33.810 "data_size": 65536 00:12:33.810 }, 00:12:33.810 { 00:12:33.810 "name": "BaseBdev2", 00:12:33.810 "uuid": "5454e4a6-be7e-4f4b-9ff6-1e61b3398476", 00:12:33.810 "is_configured": true, 00:12:33.810 "data_offset": 0, 00:12:33.810 "data_size": 65536 00:12:33.810 }, 00:12:33.810 { 00:12:33.810 "name": "BaseBdev3", 00:12:33.810 "uuid": "4f64744e-e616-4fb2-b4f5-c6e31b561bce", 00:12:33.810 "is_configured": true, 00:12:33.810 "data_offset": 0, 00:12:33.810 "data_size": 65536 00:12:33.810 }, 00:12:33.810 { 00:12:33.810 "name": "BaseBdev4", 00:12:33.810 "uuid": "fdad9825-25fe-453a-98a6-24e83549eb2d", 00:12:33.810 "is_configured": true, 00:12:33.810 "data_offset": 0, 00:12:33.810 "data_size": 65536 00:12:33.810 } 00:12:33.810 ] 00:12:33.810 }' 00:12:33.810 12:04:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:33.810 12:04:40 -- common/autotest_common.sh@10 -- # set +x 00:12:34.377 12:04:41 -- bdev/bdev_raid.sh@262 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:34.377 [2024-07-25 12:04:41.571480] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:34.377 [2024-07-25 12:04:41.571503] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:34.377 [2024-07-25 12:04:41.571536] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:34.377 12:04:41 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:12:34.377 12:04:41 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:12:34.377 12:04:41 -- bdev/bdev_raid.sh@195 
-- # case $1 in 00:12:34.377 12:04:41 -- bdev/bdev_raid.sh@197 -- # return 1 00:12:34.377 12:04:41 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:12:34.377 12:04:41 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:12:34.377 12:04:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:34.377 12:04:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:12:34.377 12:04:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:34.377 12:04:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:34.377 12:04:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:34.377 12:04:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:34.377 12:04:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:34.377 12:04:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:34.377 12:04:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:34.377 12:04:41 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:34.377 12:04:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.636 12:04:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:34.636 "name": "Existed_Raid", 00:12:34.636 "uuid": "592645ce-15eb-407a-a623-e7de1254d2e1", 00:12:34.636 "strip_size_kb": 64, 00:12:34.636 "state": "offline", 00:12:34.636 "raid_level": "concat", 00:12:34.636 "superblock": false, 00:12:34.636 "num_base_bdevs": 4, 00:12:34.636 "num_base_bdevs_discovered": 3, 00:12:34.636 "num_base_bdevs_operational": 3, 00:12:34.636 "base_bdevs_list": [ 00:12:34.636 { 00:12:34.636 "name": null, 00:12:34.636 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.636 "is_configured": false, 00:12:34.636 "data_offset": 0, 00:12:34.636 "data_size": 65536 00:12:34.636 }, 00:12:34.636 { 00:12:34.636 "name": "BaseBdev2", 00:12:34.636 "uuid": 
"5454e4a6-be7e-4f4b-9ff6-1e61b3398476", 00:12:34.636 "is_configured": true, 00:12:34.636 "data_offset": 0, 00:12:34.636 "data_size": 65536 00:12:34.636 }, 00:12:34.636 { 00:12:34.636 "name": "BaseBdev3", 00:12:34.636 "uuid": "4f64744e-e616-4fb2-b4f5-c6e31b561bce", 00:12:34.636 "is_configured": true, 00:12:34.636 "data_offset": 0, 00:12:34.636 "data_size": 65536 00:12:34.636 }, 00:12:34.636 { 00:12:34.636 "name": "BaseBdev4", 00:12:34.636 "uuid": "fdad9825-25fe-453a-98a6-24e83549eb2d", 00:12:34.636 "is_configured": true, 00:12:34.636 "data_offset": 0, 00:12:34.636 "data_size": 65536 00:12:34.636 } 00:12:34.636 ] 00:12:34.636 }' 00:12:34.636 12:04:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:34.636 12:04:41 -- common/autotest_common.sh@10 -- # set +x 00:12:35.202 12:04:42 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:12:35.202 12:04:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:35.202 12:04:42 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:12:35.202 12:04:42 -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:35.202 12:04:42 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:12:35.202 12:04:42 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:35.202 12:04:42 -- bdev/bdev_raid.sh@279 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:35.460 [2024-07-25 12:04:42.550800] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:35.460 12:04:42 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:12:35.460 12:04:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:35.460 12:04:42 -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:35.460 12:04:42 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:12:35.460 12:04:42 -- 
bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:12:35.460 12:04:42 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:35.460 12:04:42 -- bdev/bdev_raid.sh@279 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:35.718 [2024-07-25 12:04:42.903652] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:35.718 12:04:42 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:12:35.718 12:04:42 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:35.718 12:04:42 -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:35.718 12:04:42 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:12:35.977 12:04:43 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:12:35.977 12:04:43 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:35.977 12:04:43 -- bdev/bdev_raid.sh@279 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:12:35.977 [2024-07-25 12:04:43.251934] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:35.977 [2024-07-25 12:04:43.251968] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x18cc5f0 name Existed_Raid, state offline 00:12:35.977 12:04:43 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:12:35.977 12:04:43 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:35.977 12:04:43 -- bdev/bdev_raid.sh@281 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:35.977 12:04:43 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:12:36.235 12:04:43 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:12:36.235 12:04:43 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:12:36.235 12:04:43 -- bdev/bdev_raid.sh@287 -- # killprocess 1244459 00:12:36.235 
12:04:43 -- common/autotest_common.sh@926 -- # '[' -z 1244459 ']' 00:12:36.235 12:04:43 -- common/autotest_common.sh@930 -- # kill -0 1244459 00:12:36.235 12:04:43 -- common/autotest_common.sh@931 -- # uname 00:12:36.235 12:04:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:36.235 12:04:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1244459 00:12:36.235 12:04:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:36.235 12:04:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:36.235 12:04:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1244459' 00:12:36.235 killing process with pid 1244459 00:12:36.235 12:04:43 -- common/autotest_common.sh@945 -- # kill 1244459 00:12:36.235 [2024-07-25 12:04:43.491939] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:36.235 12:04:43 -- common/autotest_common.sh@950 -- # wait 1244459 00:12:36.235 [2024-07-25 12:04:43.492840] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@289 -- # return 0 00:12:36.494 00:12:36.494 real 0m9.930s 00:12:36.494 user 0m17.443s 00:12:36.494 sys 0m2.047s 00:12:36.494 12:04:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:36.494 12:04:43 -- common/autotest_common.sh@10 -- # set +x 00:12:36.494 ************************************ 00:12:36.494 END TEST raid_state_function_test 00:12:36.494 ************************************ 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:12:36.494 12:04:43 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:12:36.494 12:04:43 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:36.494 12:04:43 -- common/autotest_common.sh@10 -- # set +x 00:12:36.494 ************************************ 00:12:36.494 START TEST raid_state_function_test_sb 00:12:36.494 ************************************ 
00:12:36.494 12:04:43 -- common/autotest_common.sh@1104 -- # raid_state_function_test concat 4 true 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@202 -- # local raid_level=concat 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@212 -- # '[' concat '!=' raid1 ']' 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@213 -- # 
strip_size=64 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@214 -- # strip_size_create_arg='-z 64' 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@226 -- # raid_pid=1246037 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 1246037' 00:12:36.494 Process raid pid: 1246037 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@225 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:36.494 12:04:43 -- bdev/bdev_raid.sh@228 -- # waitforlisten 1246037 /var/tmp/spdk-raid.sock 00:12:36.494 12:04:43 -- common/autotest_common.sh@819 -- # '[' -z 1246037 ']' 00:12:36.494 12:04:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:36.494 12:04:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:36.494 12:04:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:36.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:36.494 12:04:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:36.494 12:04:43 -- common/autotest_common.sh@10 -- # set +x 00:12:36.753 [2024-07-25 12:04:43.824196] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:12:36.753 [2024-07-25 12:04:43.824245] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:36.753 [2024-07-25 12:04:43.912099] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:36.753 [2024-07-25 12:04:43.998350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:36.753 [2024-07-25 12:04:44.052855] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:36.753 [2024-07-25 12:04:44.052882] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:37.320 12:04:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:37.320 12:04:44 -- common/autotest_common.sh@852 -- # return 0 00:12:37.320 12:04:44 -- bdev/bdev_raid.sh@232 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:37.578 [2024-07-25 12:04:44.758482] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:37.578 [2024-07-25 12:04:44.758514] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:37.578 [2024-07-25 12:04:44.758521] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:37.578 [2024-07-25 12:04:44.758529] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:37.578 [2024-07-25 12:04:44.758534] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:37.578 [2024-07-25 12:04:44.758542] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:37.578 [2024-07-25 12:04:44.758547] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:12:37.578 [2024-07-25 12:04:44.758555] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:37.578 12:04:44 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:37.578 12:04:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:37.578 12:04:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:37.578 12:04:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:37.578 12:04:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:37.578 12:04:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:12:37.578 12:04:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:37.578 12:04:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:37.578 12:04:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:37.578 12:04:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:37.578 12:04:44 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:37.578 12:04:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.837 12:04:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:37.837 "name": "Existed_Raid", 00:12:37.837 "uuid": "bbbf7e79-daa7-4645-9b36-d7993149912d", 00:12:37.837 "strip_size_kb": 64, 00:12:37.837 "state": "configuring", 00:12:37.837 "raid_level": "concat", 00:12:37.837 "superblock": true, 00:12:37.837 "num_base_bdevs": 4, 00:12:37.837 "num_base_bdevs_discovered": 0, 00:12:37.837 "num_base_bdevs_operational": 4, 00:12:37.837 "base_bdevs_list": [ 00:12:37.837 { 00:12:37.837 "name": "BaseBdev1", 00:12:37.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.837 "is_configured": false, 00:12:37.837 "data_offset": 0, 00:12:37.837 "data_size": 0 00:12:37.837 }, 00:12:37.837 { 00:12:37.837 "name": "BaseBdev2", 00:12:37.837 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:37.837 "is_configured": false, 00:12:37.837 "data_offset": 0, 00:12:37.837 "data_size": 0 00:12:37.837 }, 00:12:37.837 { 00:12:37.837 "name": "BaseBdev3", 00:12:37.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.837 "is_configured": false, 00:12:37.837 "data_offset": 0, 00:12:37.837 "data_size": 0 00:12:37.837 }, 00:12:37.837 { 00:12:37.837 "name": "BaseBdev4", 00:12:37.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.837 "is_configured": false, 00:12:37.837 "data_offset": 0, 00:12:37.837 "data_size": 0 00:12:37.837 } 00:12:37.837 ] 00:12:37.837 }' 00:12:37.837 12:04:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:37.837 12:04:44 -- common/autotest_common.sh@10 -- # set +x 00:12:38.403 12:04:45 -- bdev/bdev_raid.sh@234 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:38.403 [2024-07-25 12:04:45.584501] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:38.403 [2024-07-25 12:04:45.584523] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1fedd80 name Existed_Raid, state configuring 00:12:38.403 12:04:45 -- bdev/bdev_raid.sh@238 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:38.662 [2024-07-25 12:04:45.752960] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:38.662 [2024-07-25 12:04:45.752978] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:38.662 [2024-07-25 12:04:45.752984] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:38.662 [2024-07-25 12:04:45.752991] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:38.662 [2024-07-25 
12:04:45.753012] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:38.662 [2024-07-25 12:04:45.753019] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:38.662 [2024-07-25 12:04:45.753025] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:38.662 [2024-07-25 12:04:45.753032] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:38.662 12:04:45 -- bdev/bdev_raid.sh@239 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:38.662 [2024-07-25 12:04:45.929841] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:38.662 BaseBdev1 00:12:38.662 12:04:45 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:12:38.662 12:04:45 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:12:38.662 12:04:45 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:38.662 12:04:45 -- common/autotest_common.sh@889 -- # local i 00:12:38.662 12:04:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:38.662 12:04:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:38.662 12:04:45 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:38.920 12:04:46 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:39.178 [ 00:12:39.178 { 00:12:39.178 "name": "BaseBdev1", 00:12:39.178 "aliases": [ 00:12:39.178 "d2b29b65-9cae-4bf1-91a1-60b8259e9c6f" 00:12:39.178 ], 00:12:39.178 "product_name": "Malloc disk", 00:12:39.178 "block_size": 512, 00:12:39.178 "num_blocks": 65536, 00:12:39.178 "uuid": "d2b29b65-9cae-4bf1-91a1-60b8259e9c6f", 00:12:39.178 
"assigned_rate_limits": { 00:12:39.178 "rw_ios_per_sec": 0, 00:12:39.178 "rw_mbytes_per_sec": 0, 00:12:39.178 "r_mbytes_per_sec": 0, 00:12:39.179 "w_mbytes_per_sec": 0 00:12:39.179 }, 00:12:39.179 "claimed": true, 00:12:39.179 "claim_type": "exclusive_write", 00:12:39.179 "zoned": false, 00:12:39.179 "supported_io_types": { 00:12:39.179 "read": true, 00:12:39.179 "write": true, 00:12:39.179 "unmap": true, 00:12:39.179 "write_zeroes": true, 00:12:39.179 "flush": true, 00:12:39.179 "reset": true, 00:12:39.179 "compare": false, 00:12:39.179 "compare_and_write": false, 00:12:39.179 "abort": true, 00:12:39.179 "nvme_admin": false, 00:12:39.179 "nvme_io": false 00:12:39.179 }, 00:12:39.179 "memory_domains": [ 00:12:39.179 { 00:12:39.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.179 "dma_device_type": 2 00:12:39.179 } 00:12:39.179 ], 00:12:39.179 "driver_specific": {} 00:12:39.179 } 00:12:39.179 ] 00:12:39.179 12:04:46 -- common/autotest_common.sh@895 -- # return 0 00:12:39.179 12:04:46 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:39.179 12:04:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:39.179 12:04:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:39.179 12:04:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:39.179 12:04:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:39.179 12:04:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:12:39.179 12:04:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:39.179 12:04:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:39.179 12:04:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:39.179 12:04:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:39.179 12:04:46 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:39.179 12:04:46 -- 
bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.179 12:04:46 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:39.179 "name": "Existed_Raid", 00:12:39.179 "uuid": "d9e7dbb6-afa5-4286-8823-bb714c980b93", 00:12:39.179 "strip_size_kb": 64, 00:12:39.179 "state": "configuring", 00:12:39.179 "raid_level": "concat", 00:12:39.179 "superblock": true, 00:12:39.179 "num_base_bdevs": 4, 00:12:39.179 "num_base_bdevs_discovered": 1, 00:12:39.179 "num_base_bdevs_operational": 4, 00:12:39.179 "base_bdevs_list": [ 00:12:39.179 { 00:12:39.179 "name": "BaseBdev1", 00:12:39.179 "uuid": "d2b29b65-9cae-4bf1-91a1-60b8259e9c6f", 00:12:39.179 "is_configured": true, 00:12:39.179 "data_offset": 2048, 00:12:39.179 "data_size": 63488 00:12:39.179 }, 00:12:39.179 { 00:12:39.179 "name": "BaseBdev2", 00:12:39.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.179 "is_configured": false, 00:12:39.179 "data_offset": 0, 00:12:39.179 "data_size": 0 00:12:39.179 }, 00:12:39.179 { 00:12:39.179 "name": "BaseBdev3", 00:12:39.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.179 "is_configured": false, 00:12:39.179 "data_offset": 0, 00:12:39.179 "data_size": 0 00:12:39.179 }, 00:12:39.179 { 00:12:39.179 "name": "BaseBdev4", 00:12:39.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.179 "is_configured": false, 00:12:39.179 "data_offset": 0, 00:12:39.179 "data_size": 0 00:12:39.179 } 00:12:39.179 ] 00:12:39.179 }' 00:12:39.179 12:04:46 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:39.179 12:04:46 -- common/autotest_common.sh@10 -- # set +x 00:12:39.747 12:04:46 -- bdev/bdev_raid.sh@242 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:39.747 [2024-07-25 12:04:47.040714] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:39.747 [2024-07-25 12:04:47.040749] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x1fee000 name Existed_Raid, state configuring 00:12:39.747 12:04:47 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:12:39.747 12:04:47 -- bdev/bdev_raid.sh@246 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:40.006 12:04:47 -- bdev/bdev_raid.sh@247 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:40.265 BaseBdev1 00:12:40.265 12:04:47 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:12:40.265 12:04:47 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:12:40.265 12:04:47 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:40.265 12:04:47 -- common/autotest_common.sh@889 -- # local i 00:12:40.265 12:04:47 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:40.265 12:04:47 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:40.265 12:04:47 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:40.523 12:04:47 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:40.523 [ 00:12:40.523 { 00:12:40.523 "name": "BaseBdev1", 00:12:40.523 "aliases": [ 00:12:40.523 "ffcf9875-9f84-4bba-8810-b80740cdcbc3" 00:12:40.523 ], 00:12:40.523 "product_name": "Malloc disk", 00:12:40.523 "block_size": 512, 00:12:40.523 "num_blocks": 65536, 00:12:40.523 "uuid": "ffcf9875-9f84-4bba-8810-b80740cdcbc3", 00:12:40.523 "assigned_rate_limits": { 00:12:40.523 "rw_ios_per_sec": 0, 00:12:40.523 "rw_mbytes_per_sec": 0, 00:12:40.523 "r_mbytes_per_sec": 0, 00:12:40.523 "w_mbytes_per_sec": 0 00:12:40.523 }, 00:12:40.523 "claimed": false, 00:12:40.523 "zoned": false, 00:12:40.523 "supported_io_types": { 00:12:40.523 "read": true, 00:12:40.523 "write": true, 
00:12:40.523 "unmap": true, 00:12:40.523 "write_zeroes": true, 00:12:40.523 "flush": true, 00:12:40.523 "reset": true, 00:12:40.523 "compare": false, 00:12:40.523 "compare_and_write": false, 00:12:40.523 "abort": true, 00:12:40.523 "nvme_admin": false, 00:12:40.523 "nvme_io": false 00:12:40.523 }, 00:12:40.523 "memory_domains": [ 00:12:40.523 { 00:12:40.523 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.523 "dma_device_type": 2 00:12:40.523 } 00:12:40.523 ], 00:12:40.523 "driver_specific": {} 00:12:40.523 } 00:12:40.523 ] 00:12:40.523 12:04:47 -- common/autotest_common.sh@895 -- # return 0 00:12:40.523 12:04:47 -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:40.782 [2024-07-25 12:04:47.883738] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:40.782 [2024-07-25 12:04:47.884793] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:40.782 [2024-07-25 12:04:47.884817] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:40.782 [2024-07-25 12:04:47.884823] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:40.782 [2024-07-25 12:04:47.884831] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:40.782 [2024-07-25 12:04:47.884836] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:40.782 [2024-07-25 12:04:47.884842] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:40.782 12:04:47 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:12:40.782 12:04:47 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:40.782 12:04:47 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 
64 4 00:12:40.782 12:04:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:40.782 12:04:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:40.782 12:04:47 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:40.782 12:04:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:40.782 12:04:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:12:40.782 12:04:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:40.782 12:04:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:40.782 12:04:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:40.782 12:04:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:40.782 12:04:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.782 12:04:47 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:40.782 12:04:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:40.782 "name": "Existed_Raid", 00:12:40.782 "uuid": "e2bc332c-c95a-4a6c-8977-c0ec5d6ba904", 00:12:40.782 "strip_size_kb": 64, 00:12:40.782 "state": "configuring", 00:12:40.782 "raid_level": "concat", 00:12:40.782 "superblock": true, 00:12:40.782 "num_base_bdevs": 4, 00:12:40.782 "num_base_bdevs_discovered": 1, 00:12:40.782 "num_base_bdevs_operational": 4, 00:12:40.782 "base_bdevs_list": [ 00:12:40.782 { 00:12:40.782 "name": "BaseBdev1", 00:12:40.782 "uuid": "ffcf9875-9f84-4bba-8810-b80740cdcbc3", 00:12:40.782 "is_configured": true, 00:12:40.782 "data_offset": 2048, 00:12:40.782 "data_size": 63488 00:12:40.782 }, 00:12:40.782 { 00:12:40.782 "name": "BaseBdev2", 00:12:40.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.782 "is_configured": false, 00:12:40.782 "data_offset": 0, 00:12:40.782 "data_size": 0 00:12:40.782 }, 00:12:40.782 { 00:12:40.782 "name": "BaseBdev3", 00:12:40.782 "uuid": "00000000-0000-0000-0000-000000000000", 
00:12:40.782 "is_configured": false, 00:12:40.782 "data_offset": 0, 00:12:40.782 "data_size": 0 00:12:40.782 }, 00:12:40.782 { 00:12:40.782 "name": "BaseBdev4", 00:12:40.782 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.782 "is_configured": false, 00:12:40.782 "data_offset": 0, 00:12:40.782 "data_size": 0 00:12:40.782 } 00:12:40.782 ] 00:12:40.782 }' 00:12:40.782 12:04:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:40.782 12:04:48 -- common/autotest_common.sh@10 -- # set +x 00:12:41.353 12:04:48 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:41.612 [2024-07-25 12:04:48.720595] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:41.612 BaseBdev2 00:12:41.612 12:04:48 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:12:41.612 12:04:48 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:12:41.612 12:04:48 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:41.612 12:04:48 -- common/autotest_common.sh@889 -- # local i 00:12:41.612 12:04:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:41.612 12:04:48 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:41.612 12:04:48 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:41.612 12:04:48 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:41.870 [ 00:12:41.870 { 00:12:41.870 "name": "BaseBdev2", 00:12:41.870 "aliases": [ 00:12:41.870 "0313e828-ffda-400b-9e7e-7799293e294b" 00:12:41.870 ], 00:12:41.870 "product_name": "Malloc disk", 00:12:41.870 "block_size": 512, 00:12:41.870 "num_blocks": 65536, 00:12:41.870 "uuid": "0313e828-ffda-400b-9e7e-7799293e294b", 00:12:41.870 
"assigned_rate_limits": { 00:12:41.870 "rw_ios_per_sec": 0, 00:12:41.870 "rw_mbytes_per_sec": 0, 00:12:41.870 "r_mbytes_per_sec": 0, 00:12:41.870 "w_mbytes_per_sec": 0 00:12:41.870 }, 00:12:41.870 "claimed": true, 00:12:41.870 "claim_type": "exclusive_write", 00:12:41.870 "zoned": false, 00:12:41.870 "supported_io_types": { 00:12:41.870 "read": true, 00:12:41.870 "write": true, 00:12:41.870 "unmap": true, 00:12:41.870 "write_zeroes": true, 00:12:41.870 "flush": true, 00:12:41.870 "reset": true, 00:12:41.870 "compare": false, 00:12:41.870 "compare_and_write": false, 00:12:41.870 "abort": true, 00:12:41.870 "nvme_admin": false, 00:12:41.870 "nvme_io": false 00:12:41.870 }, 00:12:41.870 "memory_domains": [ 00:12:41.870 { 00:12:41.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.870 "dma_device_type": 2 00:12:41.870 } 00:12:41.870 ], 00:12:41.870 "driver_specific": {} 00:12:41.870 } 00:12:41.870 ] 00:12:41.870 12:04:49 -- common/autotest_common.sh@895 -- # return 0 00:12:41.870 12:04:49 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:12:41.870 12:04:49 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:41.870 12:04:49 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:41.870 12:04:49 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:41.870 12:04:49 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:41.870 12:04:49 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:41.870 12:04:49 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:41.870 12:04:49 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:12:41.870 12:04:49 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:41.870 12:04:49 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:41.870 12:04:49 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:41.870 12:04:49 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:41.870 12:04:49 -- bdev/bdev_raid.sh@127 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:41.870 12:04:49 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.128 12:04:49 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:42.128 "name": "Existed_Raid", 00:12:42.128 "uuid": "e2bc332c-c95a-4a6c-8977-c0ec5d6ba904", 00:12:42.128 "strip_size_kb": 64, 00:12:42.128 "state": "configuring", 00:12:42.128 "raid_level": "concat", 00:12:42.128 "superblock": true, 00:12:42.128 "num_base_bdevs": 4, 00:12:42.128 "num_base_bdevs_discovered": 2, 00:12:42.128 "num_base_bdevs_operational": 4, 00:12:42.128 "base_bdevs_list": [ 00:12:42.128 { 00:12:42.128 "name": "BaseBdev1", 00:12:42.128 "uuid": "ffcf9875-9f84-4bba-8810-b80740cdcbc3", 00:12:42.128 "is_configured": true, 00:12:42.128 "data_offset": 2048, 00:12:42.128 "data_size": 63488 00:12:42.128 }, 00:12:42.128 { 00:12:42.128 "name": "BaseBdev2", 00:12:42.128 "uuid": "0313e828-ffda-400b-9e7e-7799293e294b", 00:12:42.128 "is_configured": true, 00:12:42.128 "data_offset": 2048, 00:12:42.128 "data_size": 63488 00:12:42.128 }, 00:12:42.128 { 00:12:42.128 "name": "BaseBdev3", 00:12:42.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.128 "is_configured": false, 00:12:42.128 "data_offset": 0, 00:12:42.128 "data_size": 0 00:12:42.128 }, 00:12:42.128 { 00:12:42.128 "name": "BaseBdev4", 00:12:42.128 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:42.128 "is_configured": false, 00:12:42.128 "data_offset": 0, 00:12:42.128 "data_size": 0 00:12:42.128 } 00:12:42.128 ] 00:12:42.128 }' 00:12:42.128 12:04:49 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:42.128 12:04:49 -- common/autotest_common.sh@10 -- # set +x 00:12:42.695 12:04:49 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:42.695 [2024-07-25 12:04:49.898439] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:42.695 BaseBdev3 00:12:42.695 12:04:49 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:12:42.695 12:04:49 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:12:42.695 12:04:49 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:42.695 12:04:49 -- common/autotest_common.sh@889 -- # local i 00:12:42.695 12:04:49 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:42.695 12:04:49 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:42.695 12:04:49 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:42.952 12:04:50 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:42.952 [ 00:12:42.952 { 00:12:42.952 "name": "BaseBdev3", 00:12:42.952 "aliases": [ 00:12:42.952 "c9e0f866-42d6-4bd4-8e01-fa100812df2b" 00:12:42.952 ], 00:12:42.952 "product_name": "Malloc disk", 00:12:42.952 "block_size": 512, 00:12:42.952 "num_blocks": 65536, 00:12:42.952 "uuid": "c9e0f866-42d6-4bd4-8e01-fa100812df2b", 00:12:42.952 "assigned_rate_limits": { 00:12:42.952 "rw_ios_per_sec": 0, 00:12:42.952 "rw_mbytes_per_sec": 0, 00:12:42.952 "r_mbytes_per_sec": 0, 00:12:42.952 "w_mbytes_per_sec": 0 00:12:42.952 }, 00:12:42.952 "claimed": true, 00:12:42.952 "claim_type": "exclusive_write", 00:12:42.952 "zoned": false, 00:12:42.952 "supported_io_types": { 00:12:42.952 "read": true, 00:12:42.952 "write": true, 00:12:42.952 "unmap": true, 00:12:42.952 "write_zeroes": true, 00:12:42.952 "flush": true, 00:12:42.952 "reset": true, 00:12:42.952 "compare": false, 00:12:42.952 "compare_and_write": false, 00:12:42.952 "abort": true, 00:12:42.952 "nvme_admin": false, 00:12:42.952 "nvme_io": false 00:12:42.952 }, 00:12:42.952 "memory_domains": [ 00:12:42.952 { 
00:12:42.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:42.952 "dma_device_type": 2 00:12:42.952 } 00:12:42.952 ], 00:12:42.952 "driver_specific": {} 00:12:42.952 } 00:12:42.952 ] 00:12:42.952 12:04:50 -- common/autotest_common.sh@895 -- # return 0 00:12:42.952 12:04:50 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:12:42.952 12:04:50 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:42.952 12:04:50 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:42.952 12:04:50 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:42.952 12:04:50 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:42.952 12:04:50 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:42.952 12:04:50 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:42.952 12:04:50 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:12:42.952 12:04:50 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:42.952 12:04:50 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:42.952 12:04:50 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:42.952 12:04:50 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:43.210 12:04:50 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:43.210 12:04:50 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.210 12:04:50 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:43.210 "name": "Existed_Raid", 00:12:43.210 "uuid": "e2bc332c-c95a-4a6c-8977-c0ec5d6ba904", 00:12:43.210 "strip_size_kb": 64, 00:12:43.210 "state": "configuring", 00:12:43.210 "raid_level": "concat", 00:12:43.210 "superblock": true, 00:12:43.210 "num_base_bdevs": 4, 00:12:43.210 "num_base_bdevs_discovered": 3, 00:12:43.210 "num_base_bdevs_operational": 4, 00:12:43.210 "base_bdevs_list": [ 00:12:43.210 { 00:12:43.210 "name": "BaseBdev1", 00:12:43.210 
"uuid": "ffcf9875-9f84-4bba-8810-b80740cdcbc3", 00:12:43.210 "is_configured": true, 00:12:43.210 "data_offset": 2048, 00:12:43.210 "data_size": 63488 00:12:43.210 }, 00:12:43.210 { 00:12:43.210 "name": "BaseBdev2", 00:12:43.210 "uuid": "0313e828-ffda-400b-9e7e-7799293e294b", 00:12:43.210 "is_configured": true, 00:12:43.210 "data_offset": 2048, 00:12:43.210 "data_size": 63488 00:12:43.210 }, 00:12:43.210 { 00:12:43.210 "name": "BaseBdev3", 00:12:43.210 "uuid": "c9e0f866-42d6-4bd4-8e01-fa100812df2b", 00:12:43.210 "is_configured": true, 00:12:43.210 "data_offset": 2048, 00:12:43.210 "data_size": 63488 00:12:43.210 }, 00:12:43.210 { 00:12:43.210 "name": "BaseBdev4", 00:12:43.210 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.211 "is_configured": false, 00:12:43.211 "data_offset": 0, 00:12:43.211 "data_size": 0 00:12:43.211 } 00:12:43.211 ] 00:12:43.211 }' 00:12:43.211 12:04:50 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:43.211 12:04:50 -- common/autotest_common.sh@10 -- # set +x 00:12:43.777 12:04:50 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:12:43.777 [2024-07-25 12:04:51.080444] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:43.777 [2024-07-25 12:04:51.080577] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x218e130 00:12:43.777 [2024-07-25 12:04:51.080586] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:43.777 [2024-07-25 12:04:51.080705] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1fe69e0 00:12:43.777 [2024-07-25 12:04:51.080780] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x218e130 00:12:43.777 [2024-07-25 12:04:51.080786] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x218e130 00:12:43.777 [2024-07-25 
12:04:51.080843] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.777 BaseBdev4 00:12:44.035 12:04:51 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:12:44.035 12:04:51 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:12:44.035 12:04:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:44.035 12:04:51 -- common/autotest_common.sh@889 -- # local i 00:12:44.035 12:04:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:44.035 12:04:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:44.035 12:04:51 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:44.035 12:04:51 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:44.293 [ 00:12:44.293 { 00:12:44.293 "name": "BaseBdev4", 00:12:44.293 "aliases": [ 00:12:44.293 "cd01dc37-7739-4e13-af27-8a1fb55c2a42" 00:12:44.293 ], 00:12:44.293 "product_name": "Malloc disk", 00:12:44.293 "block_size": 512, 00:12:44.293 "num_blocks": 65536, 00:12:44.293 "uuid": "cd01dc37-7739-4e13-af27-8a1fb55c2a42", 00:12:44.293 "assigned_rate_limits": { 00:12:44.293 "rw_ios_per_sec": 0, 00:12:44.293 "rw_mbytes_per_sec": 0, 00:12:44.293 "r_mbytes_per_sec": 0, 00:12:44.293 "w_mbytes_per_sec": 0 00:12:44.293 }, 00:12:44.293 "claimed": true, 00:12:44.293 "claim_type": "exclusive_write", 00:12:44.293 "zoned": false, 00:12:44.293 "supported_io_types": { 00:12:44.293 "read": true, 00:12:44.293 "write": true, 00:12:44.293 "unmap": true, 00:12:44.293 "write_zeroes": true, 00:12:44.293 "flush": true, 00:12:44.293 "reset": true, 00:12:44.293 "compare": false, 00:12:44.293 "compare_and_write": false, 00:12:44.293 "abort": true, 00:12:44.293 "nvme_admin": false, 00:12:44.293 "nvme_io": false 00:12:44.293 }, 00:12:44.293 "memory_domains": [ 00:12:44.293 { 
00:12:44.293 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.293 "dma_device_type": 2 00:12:44.293 } 00:12:44.293 ], 00:12:44.293 "driver_specific": {} 00:12:44.293 } 00:12:44.293 ] 00:12:44.293 12:04:51 -- common/autotest_common.sh@895 -- # return 0 00:12:44.293 12:04:51 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:12:44.293 12:04:51 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:44.293 12:04:51 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:44.293 12:04:51 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:44.293 12:04:51 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:44.293 12:04:51 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:44.293 12:04:51 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:44.293 12:04:51 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:12:44.293 12:04:51 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:44.293 12:04:51 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:44.293 12:04:51 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:44.293 12:04:51 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:44.293 12:04:51 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:44.293 12:04:51 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.293 12:04:51 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:44.293 "name": "Existed_Raid", 00:12:44.293 "uuid": "e2bc332c-c95a-4a6c-8977-c0ec5d6ba904", 00:12:44.293 "strip_size_kb": 64, 00:12:44.293 "state": "online", 00:12:44.293 "raid_level": "concat", 00:12:44.293 "superblock": true, 00:12:44.293 "num_base_bdevs": 4, 00:12:44.293 "num_base_bdevs_discovered": 4, 00:12:44.293 "num_base_bdevs_operational": 4, 00:12:44.293 "base_bdevs_list": [ 00:12:44.293 { 00:12:44.293 "name": "BaseBdev1", 00:12:44.293 "uuid": 
"ffcf9875-9f84-4bba-8810-b80740cdcbc3", 00:12:44.293 "is_configured": true, 00:12:44.293 "data_offset": 2048, 00:12:44.293 "data_size": 63488 00:12:44.293 }, 00:12:44.293 { 00:12:44.293 "name": "BaseBdev2", 00:12:44.293 "uuid": "0313e828-ffda-400b-9e7e-7799293e294b", 00:12:44.293 "is_configured": true, 00:12:44.293 "data_offset": 2048, 00:12:44.293 "data_size": 63488 00:12:44.294 }, 00:12:44.294 { 00:12:44.294 "name": "BaseBdev3", 00:12:44.294 "uuid": "c9e0f866-42d6-4bd4-8e01-fa100812df2b", 00:12:44.294 "is_configured": true, 00:12:44.294 "data_offset": 2048, 00:12:44.294 "data_size": 63488 00:12:44.294 }, 00:12:44.294 { 00:12:44.294 "name": "BaseBdev4", 00:12:44.294 "uuid": "cd01dc37-7739-4e13-af27-8a1fb55c2a42", 00:12:44.294 "is_configured": true, 00:12:44.294 "data_offset": 2048, 00:12:44.294 "data_size": 63488 00:12:44.294 } 00:12:44.294 ] 00:12:44.294 }' 00:12:44.294 12:04:51 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:44.294 12:04:51 -- common/autotest_common.sh@10 -- # set +x 00:12:44.859 12:04:52 -- bdev/bdev_raid.sh@262 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:44.859 [2024-07-25 12:04:52.151252] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:44.859 [2024-07-25 12:04:52.151278] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:44.859 [2024-07-25 12:04:52.151307] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:44.859 12:04:52 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:12:45.117 12:04:52 -- bdev/bdev_raid.sh@264 -- # has_redundancy concat 00:12:45.117 12:04:52 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:12:45.117 12:04:52 -- bdev/bdev_raid.sh@197 -- # return 1 00:12:45.117 12:04:52 -- bdev/bdev_raid.sh@265 -- # expected_state=offline 00:12:45.117 12:04:52 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:12:45.117 
12:04:52 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:45.117 12:04:52 -- bdev/bdev_raid.sh@118 -- # local expected_state=offline 00:12:45.117 12:04:52 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:45.117 12:04:52 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:45.117 12:04:52 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:12:45.117 12:04:52 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:45.117 12:04:52 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:45.117 12:04:52 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:45.117 12:04:52 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:45.117 12:04:52 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:45.117 12:04:52 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.117 12:04:52 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:45.117 "name": "Existed_Raid", 00:12:45.117 "uuid": "e2bc332c-c95a-4a6c-8977-c0ec5d6ba904", 00:12:45.117 "strip_size_kb": 64, 00:12:45.117 "state": "offline", 00:12:45.117 "raid_level": "concat", 00:12:45.117 "superblock": true, 00:12:45.117 "num_base_bdevs": 4, 00:12:45.117 "num_base_bdevs_discovered": 3, 00:12:45.117 "num_base_bdevs_operational": 3, 00:12:45.117 "base_bdevs_list": [ 00:12:45.117 { 00:12:45.117 "name": null, 00:12:45.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.117 "is_configured": false, 00:12:45.117 "data_offset": 2048, 00:12:45.117 "data_size": 63488 00:12:45.117 }, 00:12:45.117 { 00:12:45.117 "name": "BaseBdev2", 00:12:45.117 "uuid": "0313e828-ffda-400b-9e7e-7799293e294b", 00:12:45.117 "is_configured": true, 00:12:45.117 "data_offset": 2048, 00:12:45.117 "data_size": 63488 00:12:45.117 }, 00:12:45.117 { 00:12:45.117 "name": "BaseBdev3", 00:12:45.117 "uuid": "c9e0f866-42d6-4bd4-8e01-fa100812df2b", 00:12:45.117 "is_configured": 
true, 00:12:45.117 "data_offset": 2048, 00:12:45.117 "data_size": 63488 00:12:45.117 }, 00:12:45.117 { 00:12:45.117 "name": "BaseBdev4", 00:12:45.117 "uuid": "cd01dc37-7739-4e13-af27-8a1fb55c2a42", 00:12:45.117 "is_configured": true, 00:12:45.117 "data_offset": 2048, 00:12:45.117 "data_size": 63488 00:12:45.117 } 00:12:45.117 ] 00:12:45.117 }' 00:12:45.117 12:04:52 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:45.117 12:04:52 -- common/autotest_common.sh@10 -- # set +x 00:12:45.683 12:04:52 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:12:45.683 12:04:52 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:45.683 12:04:52 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:12:45.683 12:04:52 -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:45.683 12:04:52 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:12:45.683 12:04:52 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:45.683 12:04:52 -- bdev/bdev_raid.sh@279 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:12:45.942 [2024-07-25 12:04:53.115401] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:45.942 12:04:53 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:12:45.942 12:04:53 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:45.942 12:04:53 -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:45.942 12:04:53 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:12:46.214 12:04:53 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:12:46.214 12:04:53 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:46.214 12:04:53 -- bdev/bdev_raid.sh@279 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev3 00:12:46.214 [2024-07-25 12:04:53.471915] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:46.214 12:04:53 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:12:46.214 12:04:53 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:46.214 12:04:53 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:12:46.214 12:04:53 -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:46.509 12:04:53 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:12:46.509 12:04:53 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:46.509 12:04:53 -- bdev/bdev_raid.sh@279 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:12:46.767 [2024-07-25 12:04:53.836264] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:46.767 [2024-07-25 12:04:53.836301] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x218e130 name Existed_Raid, state offline 00:12:46.767 12:04:53 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:12:46.767 12:04:53 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:12:46.767 12:04:53 -- bdev/bdev_raid.sh@281 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:46.767 12:04:53 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:12:46.767 12:04:54 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:12:46.767 12:04:54 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:12:46.767 12:04:54 -- bdev/bdev_raid.sh@287 -- # killprocess 1246037 00:12:46.767 12:04:54 -- common/autotest_common.sh@926 -- # '[' -z 1246037 ']' 00:12:46.767 12:04:54 -- common/autotest_common.sh@930 -- # kill -0 1246037 00:12:46.767 12:04:54 -- common/autotest_common.sh@931 -- # uname 00:12:46.767 12:04:54 -- common/autotest_common.sh@931 -- # '[' Linux = Linux 
']' 00:12:46.767 12:04:54 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1246037 00:12:46.767 12:04:54 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:46.767 12:04:54 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:46.767 12:04:54 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1246037' 00:12:46.767 killing process with pid 1246037 00:12:46.767 12:04:54 -- common/autotest_common.sh@945 -- # kill 1246037 00:12:46.767 [2024-07-25 12:04:54.064881] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:46.767 12:04:54 -- common/autotest_common.sh@950 -- # wait 1246037 00:12:46.767 [2024-07-25 12:04:54.065672] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:47.026 12:04:54 -- bdev/bdev_raid.sh@289 -- # return 0 00:12:47.026 00:12:47.026 real 0m10.509s 00:12:47.026 user 0m18.500s 00:12:47.026 sys 0m2.101s 00:12:47.026 12:04:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:47.026 12:04:54 -- common/autotest_common.sh@10 -- # set +x 00:12:47.026 ************************************ 00:12:47.026 END TEST raid_state_function_test_sb 00:12:47.026 ************************************ 00:12:47.026 12:04:54 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:12:47.026 12:04:54 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:12:47.026 12:04:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:47.026 12:04:54 -- common/autotest_common.sh@10 -- # set +x 00:12:47.026 ************************************ 00:12:47.026 START TEST raid_superblock_test 00:12:47.026 ************************************ 00:12:47.026 12:04:54 -- common/autotest_common.sh@1104 -- # raid_superblock_test concat 4 00:12:47.026 12:04:54 -- bdev/bdev_raid.sh@338 -- # local raid_level=concat 00:12:47.026 12:04:54 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:12:47.026 12:04:54 -- bdev/bdev_raid.sh@340 -- # 
base_bdevs_malloc=() 00:12:47.026 12:04:54 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:12:47.026 12:04:54 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:12:47.026 12:04:54 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:12:47.026 12:04:54 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:12:47.026 12:04:54 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:12:47.026 12:04:54 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:12:47.026 12:04:54 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:12:47.026 12:04:54 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:12:47.026 12:04:54 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:12:47.026 12:04:54 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:12:47.026 12:04:54 -- bdev/bdev_raid.sh@349 -- # '[' concat '!=' raid1 ']' 00:12:47.026 12:04:54 -- bdev/bdev_raid.sh@350 -- # strip_size=64 00:12:47.026 12:04:54 -- bdev/bdev_raid.sh@351 -- # strip_size_create_arg='-z 64' 00:12:47.026 12:04:54 -- bdev/bdev_raid.sh@357 -- # raid_pid=1247685 00:12:47.026 12:04:54 -- bdev/bdev_raid.sh@356 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:12:47.026 12:04:54 -- bdev/bdev_raid.sh@358 -- # waitforlisten 1247685 /var/tmp/spdk-raid.sock 00:12:47.026 12:04:54 -- common/autotest_common.sh@819 -- # '[' -z 1247685 ']' 00:12:47.026 12:04:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:47.026 12:04:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:47.026 12:04:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:47.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:12:47.026 12:04:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:47.026 12:04:54 -- common/autotest_common.sh@10 -- # set +x 00:12:47.284 [2024-07-25 12:04:54.376499] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:12:47.284 [2024-07-25 12:04:54.376560] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1247685 ] 00:12:47.284 [2024-07-25 12:04:54.463724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.284 [2024-07-25 12:04:54.547666] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:47.542 [2024-07-25 12:04:54.608879] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:47.542 [2024-07-25 12:04:54.608911] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:48.108 12:04:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:48.108 12:04:55 -- common/autotest_common.sh@852 -- # return 0 00:12:48.108 12:04:55 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:12:48.108 12:04:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:12:48.108 12:04:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc1 00:12:48.108 12:04:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:12:48.108 12:04:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:48.108 12:04:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:48.108 12:04:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:12:48.108 12:04:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:48.108 12:04:55 -- bdev/bdev_raid.sh@370 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:12:48.108 malloc1 00:12:48.109 12:04:55 -- 
bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:48.366 [2024-07-25 12:04:55.477733] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:48.366 [2024-07-25 12:04:55.477779] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.366 [2024-07-25 12:04:55.477795] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10ad8d0 00:12:48.366 [2024-07-25 12:04:55.477803] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.366 [2024-07-25 12:04:55.478947] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.366 [2024-07-25 12:04:55.478971] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:48.366 pt1 00:12:48.366 12:04:55 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:12:48.366 12:04:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:12:48.366 12:04:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:12:48.366 12:04:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:12:48.366 12:04:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:48.366 12:04:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:48.366 12:04:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:12:48.366 12:04:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:48.366 12:04:55 -- bdev/bdev_raid.sh@370 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:12:48.366 malloc2 00:12:48.366 12:04:55 -- bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:12:48.625 [2024-07-25 12:04:55.810401] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:48.625 [2024-07-25 12:04:55.810448] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.625 [2024-07-25 12:04:55.810463] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12551a0 00:12:48.625 [2024-07-25 12:04:55.810471] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.625 [2024-07-25 12:04:55.811451] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.625 [2024-07-25 12:04:55.811473] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:48.625 pt2 00:12:48.625 12:04:55 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:12:48.625 12:04:55 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:12:48.625 12:04:55 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:12:48.625 12:04:55 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:12:48.625 12:04:55 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:48.625 12:04:55 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:48.625 12:04:55 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:12:48.625 12:04:55 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:48.625 12:04:55 -- bdev/bdev_raid.sh@370 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:12:48.883 malloc3 00:12:48.883 12:04:56 -- bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:48.883 [2024-07-25 12:04:56.171141] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:48.883 [2024-07-25 12:04:56.171175] 
vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:48.883 [2024-07-25 12:04:56.171190] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1255700 00:12:48.883 [2024-07-25 12:04:56.171199] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:48.883 [2024-07-25 12:04:56.172178] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:48.883 [2024-07-25 12:04:56.172200] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:48.883 pt3 00:12:48.883 12:04:56 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:12:48.883 12:04:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:12:49.141 12:04:56 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:12:49.141 12:04:56 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:12:49.141 12:04:56 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:49.141 12:04:56 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:49.141 12:04:56 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:12:49.141 12:04:56 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:49.141 12:04:56 -- bdev/bdev_raid.sh@370 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:12:49.141 malloc4 00:12:49.141 12:04:56 -- bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:49.399 [2024-07-25 12:04:56.511612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:49.399 [2024-07-25 12:04:56.511648] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:49.399 [2024-07-25 12:04:56.511664] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x1257e00 00:12:49.399 [2024-07-25 12:04:56.511672] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:49.399 [2024-07-25 12:04:56.512632] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:49.399 [2024-07-25 12:04:56.512652] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:49.399 pt4 00:12:49.399 12:04:56 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:12:49.399 12:04:56 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:12:49.399 12:04:56 -- bdev/bdev_raid.sh@375 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:12:49.399 [2024-07-25 12:04:56.684086] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:49.399 [2024-07-25 12:04:56.684915] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:49.399 [2024-07-25 12:04:56.684951] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:49.399 [2024-07-25 12:04:56.684978] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:49.399 [2024-07-25 12:04:56.685097] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x12539f0 00:12:49.399 [2024-07-25 12:04:56.685104] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:49.399 [2024-07-25 12:04:56.685228] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x125a1e0 00:12:49.399 [2024-07-25 12:04:56.685325] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x12539f0 00:12:49.399 [2024-07-25 12:04:56.685331] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x12539f0 00:12:49.400 [2024-07-25 12:04:56.685391] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:49.400 12:04:56 -- 
bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:49.400 12:04:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:12:49.400 12:04:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:49.400 12:04:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:49.400 12:04:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:49.400 12:04:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:12:49.400 12:04:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:49.400 12:04:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:49.400 12:04:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:49.400 12:04:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:49.658 12:04:56 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:49.658 12:04:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:49.658 12:04:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:49.658 "name": "raid_bdev1", 00:12:49.658 "uuid": "28a62dab-1df9-4173-b64f-20fabdee404a", 00:12:49.658 "strip_size_kb": 64, 00:12:49.658 "state": "online", 00:12:49.658 "raid_level": "concat", 00:12:49.658 "superblock": true, 00:12:49.658 "num_base_bdevs": 4, 00:12:49.658 "num_base_bdevs_discovered": 4, 00:12:49.658 "num_base_bdevs_operational": 4, 00:12:49.658 "base_bdevs_list": [ 00:12:49.658 { 00:12:49.658 "name": "pt1", 00:12:49.658 "uuid": "63308d23-507d-529d-9109-ee4a9a7bca36", 00:12:49.658 "is_configured": true, 00:12:49.658 "data_offset": 2048, 00:12:49.658 "data_size": 63488 00:12:49.658 }, 00:12:49.658 { 00:12:49.658 "name": "pt2", 00:12:49.658 "uuid": "2fa8a741-d86c-580f-98a5-f08280913dd1", 00:12:49.658 "is_configured": true, 00:12:49.658 "data_offset": 2048, 00:12:49.658 "data_size": 63488 00:12:49.658 }, 00:12:49.658 { 00:12:49.658 "name": "pt3", 00:12:49.658 
"uuid": "fd944bf0-39ea-5ea1-9d90-48bc46226eda", 00:12:49.658 "is_configured": true, 00:12:49.658 "data_offset": 2048, 00:12:49.658 "data_size": 63488 00:12:49.658 }, 00:12:49.658 { 00:12:49.658 "name": "pt4", 00:12:49.658 "uuid": "e8b1f0bc-e58e-5948-999f-beb57d45717a", 00:12:49.658 "is_configured": true, 00:12:49.658 "data_offset": 2048, 00:12:49.658 "data_size": 63488 00:12:49.658 } 00:12:49.658 ] 00:12:49.658 }' 00:12:49.658 12:04:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:49.658 12:04:56 -- common/autotest_common.sh@10 -- # set +x 00:12:50.224 12:04:57 -- bdev/bdev_raid.sh@379 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:50.224 12:04:57 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:12:50.224 [2024-07-25 12:04:57.522392] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:50.482 12:04:57 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=28a62dab-1df9-4173-b64f-20fabdee404a 00:12:50.482 12:04:57 -- bdev/bdev_raid.sh@380 -- # '[' -z 28a62dab-1df9-4173-b64f-20fabdee404a ']' 00:12:50.482 12:04:57 -- bdev/bdev_raid.sh@385 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:50.482 [2024-07-25 12:04:57.686617] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:50.482 [2024-07-25 12:04:57.686633] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:50.482 [2024-07-25 12:04:57.686671] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:50.482 [2024-07-25 12:04:57.686713] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:50.482 [2024-07-25 12:04:57.686721] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12539f0 name raid_bdev1, state offline 00:12:50.482 12:04:57 -- 
bdev/bdev_raid.sh@386 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:50.482 12:04:57 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:12:50.740 12:04:57 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:12:50.740 12:04:57 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:12:50.740 12:04:57 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:12:50.740 12:04:57 -- bdev/bdev_raid.sh@393 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:50.740 12:04:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:12:50.740 12:04:58 -- bdev/bdev_raid.sh@393 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:50.997 12:04:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:12:50.997 12:04:58 -- bdev/bdev_raid.sh@393 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:51.255 12:04:58 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:12:51.255 12:04:58 -- bdev/bdev_raid.sh@393 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:12:51.255 12:04:58 -- bdev/bdev_raid.sh@395 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:12:51.255 12:04:58 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:51.513 12:04:58 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:12:51.513 12:04:58 -- bdev/bdev_raid.sh@401 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:12:51.513 12:04:58 -- common/autotest_common.sh@640 -- # local es=0 00:12:51.513 12:04:58 
-- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:12:51.513 12:04:58 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:51.513 12:04:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:51.513 12:04:58 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:51.513 12:04:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:51.513 12:04:58 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:51.513 12:04:58 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:51.513 12:04:58 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:12:51.513 12:04:58 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:12:51.513 12:04:58 -- common/autotest_common.sh@643 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:12:51.771 [2024-07-25 12:04:58.865641] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:51.771 [2024-07-25 12:04:58.866689] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:51.771 [2024-07-25 12:04:58.866721] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:51.771 [2024-07-25 12:04:58.866744] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:51.771 [2024-07-25 12:04:58.866778] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid 
superblock found on bdev malloc1 00:12:51.771 [2024-07-25 12:04:58.866807] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:12:51.771 [2024-07-25 12:04:58.866821] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:12:51.771 [2024-07-25 12:04:58.866835] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:12:51.771 [2024-07-25 12:04:58.866847] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:51.771 [2024-07-25 12:04:58.866854] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x125a180 name raid_bdev1, state configuring 00:12:51.771 request: 00:12:51.771 { 00:12:51.771 "name": "raid_bdev1", 00:12:51.771 "raid_level": "concat", 00:12:51.771 "base_bdevs": [ 00:12:51.771 "malloc1", 00:12:51.771 "malloc2", 00:12:51.771 "malloc3", 00:12:51.771 "malloc4" 00:12:51.771 ], 00:12:51.771 "superblock": false, 00:12:51.771 "strip_size_kb": 64, 00:12:51.771 "method": "bdev_raid_create", 00:12:51.771 "req_id": 1 00:12:51.771 } 00:12:51.771 Got JSON-RPC error response 00:12:51.771 response: 00:12:51.771 { 00:12:51.771 "code": -17, 00:12:51.771 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:51.771 } 00:12:51.771 12:04:58 -- common/autotest_common.sh@643 -- # es=1 00:12:51.771 12:04:58 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:51.771 12:04:58 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:51.771 12:04:58 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:51.771 12:04:58 -- bdev/bdev_raid.sh@403 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:51.771 12:04:58 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:12:51.771 12:04:59 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:12:51.771 12:04:59 -- bdev/bdev_raid.sh@404 -- 
# '[' -n '' ']' 00:12:51.771 12:04:59 -- bdev/bdev_raid.sh@409 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:52.029 [2024-07-25 12:04:59.210494] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:52.029 [2024-07-25 12:04:59.210529] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:52.029 [2024-07-25 12:04:59.210545] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10adb00 00:12:52.029 [2024-07-25 12:04:59.210554] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:52.029 [2024-07-25 12:04:59.211849] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:52.029 [2024-07-25 12:04:59.211875] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:52.029 [2024-07-25 12:04:59.211930] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:12:52.029 [2024-07-25 12:04:59.211950] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:52.029 pt1 00:12:52.029 12:04:59 -- bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:12:52.029 12:04:59 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:12:52.029 12:04:59 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:52.029 12:04:59 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:52.029 12:04:59 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:52.029 12:04:59 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:12:52.029 12:04:59 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:52.029 12:04:59 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:52.029 12:04:59 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:52.029 12:04:59 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:12:52.029 12:04:59 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:52.029 12:04:59 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.287 12:04:59 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:52.287 "name": "raid_bdev1", 00:12:52.287 "uuid": "28a62dab-1df9-4173-b64f-20fabdee404a", 00:12:52.287 "strip_size_kb": 64, 00:12:52.287 "state": "configuring", 00:12:52.287 "raid_level": "concat", 00:12:52.287 "superblock": true, 00:12:52.287 "num_base_bdevs": 4, 00:12:52.287 "num_base_bdevs_discovered": 1, 00:12:52.287 "num_base_bdevs_operational": 4, 00:12:52.287 "base_bdevs_list": [ 00:12:52.287 { 00:12:52.287 "name": "pt1", 00:12:52.287 "uuid": "63308d23-507d-529d-9109-ee4a9a7bca36", 00:12:52.287 "is_configured": true, 00:12:52.287 "data_offset": 2048, 00:12:52.287 "data_size": 63488 00:12:52.287 }, 00:12:52.287 { 00:12:52.287 "name": null, 00:12:52.287 "uuid": "2fa8a741-d86c-580f-98a5-f08280913dd1", 00:12:52.287 "is_configured": false, 00:12:52.287 "data_offset": 2048, 00:12:52.287 "data_size": 63488 00:12:52.287 }, 00:12:52.287 { 00:12:52.287 "name": null, 00:12:52.287 "uuid": "fd944bf0-39ea-5ea1-9d90-48bc46226eda", 00:12:52.287 "is_configured": false, 00:12:52.287 "data_offset": 2048, 00:12:52.287 "data_size": 63488 00:12:52.287 }, 00:12:52.287 { 00:12:52.287 "name": null, 00:12:52.287 "uuid": "e8b1f0bc-e58e-5948-999f-beb57d45717a", 00:12:52.287 "is_configured": false, 00:12:52.287 "data_offset": 2048, 00:12:52.287 "data_size": 63488 00:12:52.287 } 00:12:52.287 ] 00:12:52.287 }' 00:12:52.287 12:04:59 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:52.287 12:04:59 -- common/autotest_common.sh@10 -- # set +x 00:12:52.853 12:04:59 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:12:52.853 12:04:59 -- bdev/bdev_raid.sh@416 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:52.853 [2024-07-25 12:05:00.024608] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:52.853 [2024-07-25 12:05:00.024656] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:52.853 [2024-07-25 12:05:00.024675] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1248f00 00:12:52.853 [2024-07-25 12:05:00.024684] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:52.853 [2024-07-25 12:05:00.024960] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:52.853 [2024-07-25 12:05:00.024972] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:52.853 [2024-07-25 12:05:00.025025] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:12:52.853 [2024-07-25 12:05:00.025039] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:52.853 pt2 00:12:52.853 12:05:00 -- bdev/bdev_raid.sh@417 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:53.111 [2024-07-25 12:05:00.201095] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:53.111 12:05:00 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:12:53.111 12:05:00 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:12:53.111 12:05:00 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:53.111 12:05:00 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:53.111 12:05:00 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:53.111 12:05:00 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:12:53.111 12:05:00 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:53.111 12:05:00 -- bdev/bdev_raid.sh@123 -- # 
local num_base_bdevs 00:12:53.111 12:05:00 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:53.111 12:05:00 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:53.111 12:05:00 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:53.111 12:05:00 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:53.111 12:05:00 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:53.111 "name": "raid_bdev1", 00:12:53.111 "uuid": "28a62dab-1df9-4173-b64f-20fabdee404a", 00:12:53.111 "strip_size_kb": 64, 00:12:53.111 "state": "configuring", 00:12:53.111 "raid_level": "concat", 00:12:53.111 "superblock": true, 00:12:53.111 "num_base_bdevs": 4, 00:12:53.111 "num_base_bdevs_discovered": 1, 00:12:53.111 "num_base_bdevs_operational": 4, 00:12:53.111 "base_bdevs_list": [ 00:12:53.111 { 00:12:53.111 "name": "pt1", 00:12:53.111 "uuid": "63308d23-507d-529d-9109-ee4a9a7bca36", 00:12:53.111 "is_configured": true, 00:12:53.111 "data_offset": 2048, 00:12:53.111 "data_size": 63488 00:12:53.111 }, 00:12:53.111 { 00:12:53.111 "name": null, 00:12:53.111 "uuid": "2fa8a741-d86c-580f-98a5-f08280913dd1", 00:12:53.111 "is_configured": false, 00:12:53.111 "data_offset": 2048, 00:12:53.111 "data_size": 63488 00:12:53.111 }, 00:12:53.111 { 00:12:53.111 "name": null, 00:12:53.111 "uuid": "fd944bf0-39ea-5ea1-9d90-48bc46226eda", 00:12:53.111 "is_configured": false, 00:12:53.111 "data_offset": 2048, 00:12:53.111 "data_size": 63488 00:12:53.111 }, 00:12:53.111 { 00:12:53.111 "name": null, 00:12:53.111 "uuid": "e8b1f0bc-e58e-5948-999f-beb57d45717a", 00:12:53.111 "is_configured": false, 00:12:53.111 "data_offset": 2048, 00:12:53.111 "data_size": 63488 00:12:53.111 } 00:12:53.111 ] 00:12:53.111 }' 00:12:53.111 12:05:00 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:53.111 12:05:00 -- common/autotest_common.sh@10 -- # set +x 00:12:53.678 12:05:00 -- bdev/bdev_raid.sh@422 -- # 
(( i = 1 )) 00:12:53.678 12:05:00 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:12:53.678 12:05:00 -- bdev/bdev_raid.sh@423 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:53.937 [2024-07-25 12:05:01.043229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:53.937 [2024-07-25 12:05:01.043277] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.937 [2024-07-25 12:05:01.043293] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1259be0 00:12:53.937 [2024-07-25 12:05:01.043302] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.937 [2024-07-25 12:05:01.043565] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.937 [2024-07-25 12:05:01.043576] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:53.937 [2024-07-25 12:05:01.043625] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:12:53.937 [2024-07-25 12:05:01.043638] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:53.937 pt2 00:12:53.937 12:05:01 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:12:53.937 12:05:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:12:53.937 12:05:01 -- bdev/bdev_raid.sh@423 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:53.937 [2024-07-25 12:05:01.211664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:53.937 [2024-07-25 12:05:01.211699] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:53.937 [2024-07-25 12:05:01.211715] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x125aaa0 00:12:53.937 [2024-07-25 12:05:01.211723] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:53.937 [2024-07-25 12:05:01.211956] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:53.937 [2024-07-25 12:05:01.211968] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:53.937 [2024-07-25 12:05:01.212014] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:12:53.937 [2024-07-25 12:05:01.212026] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:53.937 pt3 00:12:53.937 12:05:01 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:12:53.937 12:05:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:12:53.937 12:05:01 -- bdev/bdev_raid.sh@423 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:54.196 [2024-07-25 12:05:01.388115] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:54.196 [2024-07-25 12:05:01.388150] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:54.196 [2024-07-25 12:05:01.388164] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1254740 00:12:54.196 [2024-07-25 12:05:01.388172] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:54.196 [2024-07-25 12:05:01.388417] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:54.196 [2024-07-25 12:05:01.388430] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:54.196 [2024-07-25 12:05:01.388473] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:12:54.196 [2024-07-25 12:05:01.388486] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:54.196 
[2024-07-25 12:05:01.388569] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x125a5a0 00:12:54.196 [2024-07-25 12:05:01.388576] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:54.196 [2024-07-25 12:05:01.388694] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10af3c0 00:12:54.196 [2024-07-25 12:05:01.388781] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x125a5a0 00:12:54.196 [2024-07-25 12:05:01.388787] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x125a5a0 00:12:54.196 [2024-07-25 12:05:01.388854] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:54.196 pt4 00:12:54.196 12:05:01 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:12:54.196 12:05:01 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:12:54.196 12:05:01 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:54.196 12:05:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:12:54.196 12:05:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:12:54.196 12:05:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=concat 00:12:54.196 12:05:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=64 00:12:54.196 12:05:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:12:54.196 12:05:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:54.196 12:05:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:54.196 12:05:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:54.196 12:05:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:54.196 12:05:01 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:54.196 12:05:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:54.454 12:05:01 -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:54.454 "name": "raid_bdev1", 00:12:54.454 "uuid": "28a62dab-1df9-4173-b64f-20fabdee404a", 00:12:54.454 "strip_size_kb": 64, 00:12:54.454 "state": "online", 00:12:54.454 "raid_level": "concat", 00:12:54.454 "superblock": true, 00:12:54.454 "num_base_bdevs": 4, 00:12:54.454 "num_base_bdevs_discovered": 4, 00:12:54.454 "num_base_bdevs_operational": 4, 00:12:54.454 "base_bdevs_list": [ 00:12:54.454 { 00:12:54.454 "name": "pt1", 00:12:54.454 "uuid": "63308d23-507d-529d-9109-ee4a9a7bca36", 00:12:54.454 "is_configured": true, 00:12:54.454 "data_offset": 2048, 00:12:54.454 "data_size": 63488 00:12:54.454 }, 00:12:54.454 { 00:12:54.454 "name": "pt2", 00:12:54.454 "uuid": "2fa8a741-d86c-580f-98a5-f08280913dd1", 00:12:54.454 "is_configured": true, 00:12:54.454 "data_offset": 2048, 00:12:54.454 "data_size": 63488 00:12:54.454 }, 00:12:54.454 { 00:12:54.454 "name": "pt3", 00:12:54.454 "uuid": "fd944bf0-39ea-5ea1-9d90-48bc46226eda", 00:12:54.454 "is_configured": true, 00:12:54.454 "data_offset": 2048, 00:12:54.454 "data_size": 63488 00:12:54.454 }, 00:12:54.454 { 00:12:54.454 "name": "pt4", 00:12:54.454 "uuid": "e8b1f0bc-e58e-5948-999f-beb57d45717a", 00:12:54.454 "is_configured": true, 00:12:54.454 "data_offset": 2048, 00:12:54.454 "data_size": 63488 00:12:54.454 } 00:12:54.454 ] 00:12:54.454 }' 00:12:54.454 12:05:01 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:54.454 12:05:01 -- common/autotest_common.sh@10 -- # set +x 00:12:55.020 12:05:02 -- bdev/bdev_raid.sh@430 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:55.020 12:05:02 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:12:55.020 [2024-07-25 12:05:02.222406] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:55.020 12:05:02 -- bdev/bdev_raid.sh@430 -- # '[' 28a62dab-1df9-4173-b64f-20fabdee404a '!=' 28a62dab-1df9-4173-b64f-20fabdee404a ']' 
00:12:55.020 12:05:02 -- bdev/bdev_raid.sh@434 -- # has_redundancy concat 00:12:55.020 12:05:02 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:12:55.020 12:05:02 -- bdev/bdev_raid.sh@197 -- # return 1 00:12:55.020 12:05:02 -- bdev/bdev_raid.sh@511 -- # killprocess 1247685 00:12:55.020 12:05:02 -- common/autotest_common.sh@926 -- # '[' -z 1247685 ']' 00:12:55.020 12:05:02 -- common/autotest_common.sh@930 -- # kill -0 1247685 00:12:55.020 12:05:02 -- common/autotest_common.sh@931 -- # uname 00:12:55.020 12:05:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:55.020 12:05:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1247685 00:12:55.020 12:05:02 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:55.020 12:05:02 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:55.020 12:05:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1247685' 00:12:55.020 killing process with pid 1247685 00:12:55.020 12:05:02 -- common/autotest_common.sh@945 -- # kill 1247685 00:12:55.020 [2024-07-25 12:05:02.292672] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:55.020 [2024-07-25 12:05:02.292723] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:55.020 12:05:02 -- common/autotest_common.sh@950 -- # wait 1247685 00:12:55.020 [2024-07-25 12:05:02.292771] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:55.020 [2024-07-25 12:05:02.292779] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x125a5a0 name raid_bdev1, state offline 00:12:55.278 [2024-07-25 12:05:02.330708] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:55.278 12:05:02 -- bdev/bdev_raid.sh@513 -- # return 0 00:12:55.278 00:12:55.278 real 0m8.222s 00:12:55.278 user 0m14.239s 00:12:55.278 sys 0m1.656s 00:12:55.279 12:05:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:55.279 
12:05:02 -- common/autotest_common.sh@10 -- # set +x 00:12:55.279 ************************************ 00:12:55.279 END TEST raid_superblock_test 00:12:55.279 ************************************ 00:12:55.279 12:05:02 -- bdev/bdev_raid.sh@726 -- # for level in raid0 concat raid1 00:12:55.279 12:05:02 -- bdev/bdev_raid.sh@727 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:12:55.279 12:05:02 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:12:55.279 12:05:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:55.279 12:05:02 -- common/autotest_common.sh@10 -- # set +x 00:12:55.537 ************************************ 00:12:55.537 START TEST raid_state_function_test 00:12:55.537 ************************************ 00:12:55.537 12:05:02 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 false 00:12:55.537 12:05:02 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:12:55.537 12:05:02 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:12:55.537 12:05:02 -- bdev/bdev_raid.sh@204 -- # local superblock=false 00:12:55.537 12:05:02 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:12:55.537 12:05:02 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:12:55.537 12:05:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:55.537 12:05:02 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:12:55.537 12:05:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:55.537 12:05:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:55.537 12:05:02 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:12:55.537 12:05:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:55.537 12:05:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:55.537 12:05:02 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:12:55.537 12:05:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:55.537 12:05:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:55.537 12:05:02 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 
00:12:55.537 12:05:02 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:12:55.537 12:05:02 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:12:55.537 12:05:02 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:55.537 12:05:02 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:12:55.537 12:05:02 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:12:55.537 12:05:02 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:12:55.537 12:05:02 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:12:55.537 12:05:02 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:12:55.537 12:05:02 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:12:55.537 12:05:02 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:12:55.537 12:05:02 -- bdev/bdev_raid.sh@219 -- # '[' false = true ']' 00:12:55.537 12:05:02 -- bdev/bdev_raid.sh@222 -- # superblock_create_arg= 00:12:55.537 12:05:02 -- bdev/bdev_raid.sh@226 -- # raid_pid=1248942 00:12:55.537 12:05:02 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 1248942' 00:12:55.537 Process raid pid: 1248942 00:12:55.538 12:05:02 -- bdev/bdev_raid.sh@225 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:55.538 12:05:02 -- bdev/bdev_raid.sh@228 -- # waitforlisten 1248942 /var/tmp/spdk-raid.sock 00:12:55.538 12:05:02 -- common/autotest_common.sh@819 -- # '[' -z 1248942 ']' 00:12:55.538 12:05:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:55.538 12:05:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:55.538 12:05:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:55.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
00:12:55.538 12:05:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:55.538 12:05:02 -- common/autotest_common.sh@10 -- # set +x 00:12:55.538 [2024-07-25 12:05:02.630069] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:12:55.538 [2024-07-25 12:05:02.630122] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:55.538 [2024-07-25 12:05:02.720269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.538 [2024-07-25 12:05:02.805434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.796 [2024-07-25 12:05:02.857239] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:55.796 [2024-07-25 12:05:02.857267] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:56.363 12:05:03 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:56.363 12:05:03 -- common/autotest_common.sh@852 -- # return 0 00:12:56.363 12:05:03 -- bdev/bdev_raid.sh@232 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:56.363 [2024-07-25 12:05:03.567475] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:56.363 [2024-07-25 12:05:03.567503] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:56.363 [2024-07-25 12:05:03.567510] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:56.363 [2024-07-25 12:05:03.567518] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:56.363 [2024-07-25 12:05:03.567523] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:56.363 
[2024-07-25 12:05:03.567530] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:56.363 [2024-07-25 12:05:03.567535] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:56.363 [2024-07-25 12:05:03.567542] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:56.363 12:05:03 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:56.363 12:05:03 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:56.363 12:05:03 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:56.363 12:05:03 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:56.363 12:05:03 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:56.363 12:05:03 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:12:56.363 12:05:03 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:56.363 12:05:03 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:56.363 12:05:03 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:56.363 12:05:03 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:56.363 12:05:03 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:56.363 12:05:03 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.622 12:05:03 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:56.622 "name": "Existed_Raid", 00:12:56.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.622 "strip_size_kb": 0, 00:12:56.622 "state": "configuring", 00:12:56.622 "raid_level": "raid1", 00:12:56.622 "superblock": false, 00:12:56.622 "num_base_bdevs": 4, 00:12:56.622 "num_base_bdevs_discovered": 0, 00:12:56.622 "num_base_bdevs_operational": 4, 00:12:56.622 "base_bdevs_list": [ 00:12:56.622 { 00:12:56.622 "name": "BaseBdev1", 00:12:56.622 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:56.622 "is_configured": false, 00:12:56.622 "data_offset": 0, 00:12:56.622 "data_size": 0 00:12:56.622 }, 00:12:56.622 { 00:12:56.622 "name": "BaseBdev2", 00:12:56.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.622 "is_configured": false, 00:12:56.622 "data_offset": 0, 00:12:56.622 "data_size": 0 00:12:56.622 }, 00:12:56.622 { 00:12:56.622 "name": "BaseBdev3", 00:12:56.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.622 "is_configured": false, 00:12:56.622 "data_offset": 0, 00:12:56.622 "data_size": 0 00:12:56.622 }, 00:12:56.622 { 00:12:56.622 "name": "BaseBdev4", 00:12:56.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.622 "is_configured": false, 00:12:56.622 "data_offset": 0, 00:12:56.622 "data_size": 0 00:12:56.622 } 00:12:56.622 ] 00:12:56.622 }' 00:12:56.622 12:05:03 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:56.622 12:05:03 -- common/autotest_common.sh@10 -- # set +x 00:12:57.188 12:05:04 -- bdev/bdev_raid.sh@234 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:57.188 [2024-07-25 12:05:04.397541] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:57.188 [2024-07-25 12:05:04.397565] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1617d80 name Existed_Raid, state configuring 00:12:57.188 12:05:04 -- bdev/bdev_raid.sh@238 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:57.447 [2024-07-25 12:05:04.565977] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:57.447 [2024-07-25 12:05:04.566001] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:57.447 [2024-07-25 12:05:04.566007] bdev.c:8019:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev2 00:12:57.447 [2024-07-25 12:05:04.566015] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:57.447 [2024-07-25 12:05:04.566037] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:57.447 [2024-07-25 12:05:04.566044] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:57.447 [2024-07-25 12:05:04.566050] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:57.447 [2024-07-25 12:05:04.566058] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:57.447 12:05:04 -- bdev/bdev_raid.sh@239 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:57.447 [2024-07-25 12:05:04.744206] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:57.447 BaseBdev1 00:12:57.706 12:05:04 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:12:57.706 12:05:04 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:12:57.706 12:05:04 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:57.706 12:05:04 -- common/autotest_common.sh@889 -- # local i 00:12:57.706 12:05:04 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:57.706 12:05:04 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:57.706 12:05:04 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:57.706 12:05:04 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:57.965 [ 00:12:57.965 { 00:12:57.965 "name": "BaseBdev1", 00:12:57.965 "aliases": [ 00:12:57.965 "e6e49119-5308-4f63-934b-9a1bd70689b3" 
00:12:57.965 ], 00:12:57.965 "product_name": "Malloc disk", 00:12:57.965 "block_size": 512, 00:12:57.965 "num_blocks": 65536, 00:12:57.965 "uuid": "e6e49119-5308-4f63-934b-9a1bd70689b3", 00:12:57.965 "assigned_rate_limits": { 00:12:57.965 "rw_ios_per_sec": 0, 00:12:57.965 "rw_mbytes_per_sec": 0, 00:12:57.965 "r_mbytes_per_sec": 0, 00:12:57.965 "w_mbytes_per_sec": 0 00:12:57.965 }, 00:12:57.965 "claimed": true, 00:12:57.965 "claim_type": "exclusive_write", 00:12:57.965 "zoned": false, 00:12:57.965 "supported_io_types": { 00:12:57.965 "read": true, 00:12:57.965 "write": true, 00:12:57.965 "unmap": true, 00:12:57.965 "write_zeroes": true, 00:12:57.965 "flush": true, 00:12:57.965 "reset": true, 00:12:57.965 "compare": false, 00:12:57.965 "compare_and_write": false, 00:12:57.965 "abort": true, 00:12:57.965 "nvme_admin": false, 00:12:57.965 "nvme_io": false 00:12:57.965 }, 00:12:57.965 "memory_domains": [ 00:12:57.965 { 00:12:57.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.965 "dma_device_type": 2 00:12:57.965 } 00:12:57.965 ], 00:12:57.965 "driver_specific": {} 00:12:57.965 } 00:12:57.965 ] 00:12:57.965 12:05:05 -- common/autotest_common.sh@895 -- # return 0 00:12:57.965 12:05:05 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:57.965 12:05:05 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:57.965 12:05:05 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:57.965 12:05:05 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:57.965 12:05:05 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:12:57.965 12:05:05 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:12:57.965 12:05:05 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:57.965 12:05:05 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:57.965 12:05:05 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:57.965 12:05:05 -- bdev/bdev_raid.sh@125 -- # local tmp 
00:12:57.965 12:05:05 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:57.965 12:05:05 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.965 12:05:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:57.965 "name": "Existed_Raid", 00:12:57.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.965 "strip_size_kb": 0, 00:12:57.965 "state": "configuring", 00:12:57.965 "raid_level": "raid1", 00:12:57.965 "superblock": false, 00:12:57.965 "num_base_bdevs": 4, 00:12:57.965 "num_base_bdevs_discovered": 1, 00:12:57.965 "num_base_bdevs_operational": 4, 00:12:57.965 "base_bdevs_list": [ 00:12:57.965 { 00:12:57.965 "name": "BaseBdev1", 00:12:57.965 "uuid": "e6e49119-5308-4f63-934b-9a1bd70689b3", 00:12:57.965 "is_configured": true, 00:12:57.965 "data_offset": 0, 00:12:57.965 "data_size": 65536 00:12:57.965 }, 00:12:57.965 { 00:12:57.965 "name": "BaseBdev2", 00:12:57.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.965 "is_configured": false, 00:12:57.965 "data_offset": 0, 00:12:57.965 "data_size": 0 00:12:57.965 }, 00:12:57.965 { 00:12:57.965 "name": "BaseBdev3", 00:12:57.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.965 "is_configured": false, 00:12:57.965 "data_offset": 0, 00:12:57.965 "data_size": 0 00:12:57.965 }, 00:12:57.965 { 00:12:57.965 "name": "BaseBdev4", 00:12:57.965 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:57.965 "is_configured": false, 00:12:57.965 "data_offset": 0, 00:12:57.965 "data_size": 0 00:12:57.965 } 00:12:57.965 ] 00:12:57.965 }' 00:12:57.965 12:05:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:57.965 12:05:05 -- common/autotest_common.sh@10 -- # set +x 00:12:58.533 12:05:05 -- bdev/bdev_raid.sh@242 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:58.791 [2024-07-25 12:05:05.915211] 
bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:58.791 [2024-07-25 12:05:05.915246] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1618000 name Existed_Raid, state configuring 00:12:58.791 12:05:05 -- bdev/bdev_raid.sh@244 -- # '[' false = true ']' 00:12:58.791 12:05:05 -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:58.791 [2024-07-25 12:05:06.083663] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:58.791 [2024-07-25 12:05:06.084758] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:58.791 [2024-07-25 12:05:06.084783] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:58.791 [2024-07-25 12:05:06.084790] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:58.791 [2024-07-25 12:05:06.084798] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:58.791 [2024-07-25 12:05:06.084819] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:58.791 [2024-07-25 12:05:06.084827] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:58.791 12:05:06 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:12:58.791 12:05:06 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:12:58.791 12:05:06 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:58.791 12:05:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:12:58.791 12:05:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:12:58.791 12:05:06 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:12:58.791 12:05:06 -- bdev/bdev_raid.sh@120 -- # local 
strip_size=0 00:12:58.791 12:05:06 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:12:58.791 12:05:06 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:12:59.050 12:05:06 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:12:59.050 12:05:06 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:12:59.050 12:05:06 -- bdev/bdev_raid.sh@125 -- # local tmp 00:12:59.050 12:05:06 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:59.050 12:05:06 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.050 12:05:06 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:12:59.050 "name": "Existed_Raid", 00:12:59.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.050 "strip_size_kb": 0, 00:12:59.050 "state": "configuring", 00:12:59.050 "raid_level": "raid1", 00:12:59.050 "superblock": false, 00:12:59.050 "num_base_bdevs": 4, 00:12:59.050 "num_base_bdevs_discovered": 1, 00:12:59.050 "num_base_bdevs_operational": 4, 00:12:59.050 "base_bdevs_list": [ 00:12:59.050 { 00:12:59.050 "name": "BaseBdev1", 00:12:59.050 "uuid": "e6e49119-5308-4f63-934b-9a1bd70689b3", 00:12:59.050 "is_configured": true, 00:12:59.050 "data_offset": 0, 00:12:59.050 "data_size": 65536 00:12:59.050 }, 00:12:59.050 { 00:12:59.050 "name": "BaseBdev2", 00:12:59.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.050 "is_configured": false, 00:12:59.050 "data_offset": 0, 00:12:59.050 "data_size": 0 00:12:59.050 }, 00:12:59.050 { 00:12:59.050 "name": "BaseBdev3", 00:12:59.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.050 "is_configured": false, 00:12:59.050 "data_offset": 0, 00:12:59.050 "data_size": 0 00:12:59.050 }, 00:12:59.050 { 00:12:59.050 "name": "BaseBdev4", 00:12:59.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:59.050 "is_configured": false, 00:12:59.050 "data_offset": 0, 00:12:59.050 "data_size": 0 
00:12:59.050 } 00:12:59.050 ] 00:12:59.050 }' 00:12:59.050 12:05:06 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:12:59.050 12:05:06 -- common/autotest_common.sh@10 -- # set +x 00:12:59.617 12:05:06 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:59.876 [2024-07-25 12:05:06.940745] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:59.876 BaseBdev2 00:12:59.876 12:05:06 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:12:59.876 12:05:06 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:12:59.876 12:05:06 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:12:59.876 12:05:06 -- common/autotest_common.sh@889 -- # local i 00:12:59.876 12:05:06 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:12:59.876 12:05:06 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:12:59.876 12:05:06 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:59.876 12:05:07 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:00.134 [ 00:13:00.134 { 00:13:00.134 "name": "BaseBdev2", 00:13:00.134 "aliases": [ 00:13:00.134 "20266428-4027-4a61-bcfa-73d2822d7562" 00:13:00.134 ], 00:13:00.134 "product_name": "Malloc disk", 00:13:00.134 "block_size": 512, 00:13:00.134 "num_blocks": 65536, 00:13:00.134 "uuid": "20266428-4027-4a61-bcfa-73d2822d7562", 00:13:00.134 "assigned_rate_limits": { 00:13:00.134 "rw_ios_per_sec": 0, 00:13:00.134 "rw_mbytes_per_sec": 0, 00:13:00.134 "r_mbytes_per_sec": 0, 00:13:00.134 "w_mbytes_per_sec": 0 00:13:00.134 }, 00:13:00.134 "claimed": true, 00:13:00.134 "claim_type": "exclusive_write", 00:13:00.134 "zoned": false, 00:13:00.134 "supported_io_types": { 
00:13:00.134 "read": true, 00:13:00.134 "write": true, 00:13:00.134 "unmap": true, 00:13:00.134 "write_zeroes": true, 00:13:00.134 "flush": true, 00:13:00.134 "reset": true, 00:13:00.134 "compare": false, 00:13:00.134 "compare_and_write": false, 00:13:00.134 "abort": true, 00:13:00.134 "nvme_admin": false, 00:13:00.134 "nvme_io": false 00:13:00.134 }, 00:13:00.134 "memory_domains": [ 00:13:00.134 { 00:13:00.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.134 "dma_device_type": 2 00:13:00.134 } 00:13:00.134 ], 00:13:00.134 "driver_specific": {} 00:13:00.134 } 00:13:00.134 ] 00:13:00.134 12:05:07 -- common/autotest_common.sh@895 -- # return 0 00:13:00.134 12:05:07 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:00.134 12:05:07 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:00.134 12:05:07 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:00.134 12:05:07 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:00.134 12:05:07 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:00.134 12:05:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:00.134 12:05:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:00.134 12:05:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:00.135 12:05:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:00.135 12:05:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:00.135 12:05:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:00.135 12:05:07 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:00.135 12:05:07 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:00.135 12:05:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.464 12:05:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:00.464 "name": "Existed_Raid", 00:13:00.464 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:00.464 "strip_size_kb": 0, 00:13:00.464 "state": "configuring", 00:13:00.464 "raid_level": "raid1", 00:13:00.464 "superblock": false, 00:13:00.464 "num_base_bdevs": 4, 00:13:00.464 "num_base_bdevs_discovered": 2, 00:13:00.464 "num_base_bdevs_operational": 4, 00:13:00.464 "base_bdevs_list": [ 00:13:00.464 { 00:13:00.464 "name": "BaseBdev1", 00:13:00.464 "uuid": "e6e49119-5308-4f63-934b-9a1bd70689b3", 00:13:00.464 "is_configured": true, 00:13:00.464 "data_offset": 0, 00:13:00.464 "data_size": 65536 00:13:00.464 }, 00:13:00.464 { 00:13:00.464 "name": "BaseBdev2", 00:13:00.464 "uuid": "20266428-4027-4a61-bcfa-73d2822d7562", 00:13:00.464 "is_configured": true, 00:13:00.464 "data_offset": 0, 00:13:00.464 "data_size": 65536 00:13:00.464 }, 00:13:00.464 { 00:13:00.464 "name": "BaseBdev3", 00:13:00.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.464 "is_configured": false, 00:13:00.464 "data_offset": 0, 00:13:00.464 "data_size": 0 00:13:00.464 }, 00:13:00.464 { 00:13:00.464 "name": "BaseBdev4", 00:13:00.464 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:00.465 "is_configured": false, 00:13:00.465 "data_offset": 0, 00:13:00.465 "data_size": 0 00:13:00.465 } 00:13:00.465 ] 00:13:00.465 }' 00:13:00.465 12:05:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:00.465 12:05:07 -- common/autotest_common.sh@10 -- # set +x 00:13:00.744 12:05:07 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:01.003 [2024-07-25 12:05:08.118645] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:01.003 BaseBdev3 00:13:01.003 12:05:08 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:13:01.003 12:05:08 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:13:01.003 12:05:08 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:01.003 12:05:08 -- 
common/autotest_common.sh@889 -- # local i 00:13:01.003 12:05:08 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:01.003 12:05:08 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:01.003 12:05:08 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:01.262 12:05:08 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:01.262 [ 00:13:01.262 { 00:13:01.262 "name": "BaseBdev3", 00:13:01.262 "aliases": [ 00:13:01.262 "488d7b0f-127a-4d4d-bc80-3bec304ad107" 00:13:01.263 ], 00:13:01.263 "product_name": "Malloc disk", 00:13:01.263 "block_size": 512, 00:13:01.263 "num_blocks": 65536, 00:13:01.263 "uuid": "488d7b0f-127a-4d4d-bc80-3bec304ad107", 00:13:01.263 "assigned_rate_limits": { 00:13:01.263 "rw_ios_per_sec": 0, 00:13:01.263 "rw_mbytes_per_sec": 0, 00:13:01.263 "r_mbytes_per_sec": 0, 00:13:01.263 "w_mbytes_per_sec": 0 00:13:01.263 }, 00:13:01.263 "claimed": true, 00:13:01.263 "claim_type": "exclusive_write", 00:13:01.263 "zoned": false, 00:13:01.263 "supported_io_types": { 00:13:01.263 "read": true, 00:13:01.263 "write": true, 00:13:01.263 "unmap": true, 00:13:01.263 "write_zeroes": true, 00:13:01.263 "flush": true, 00:13:01.263 "reset": true, 00:13:01.263 "compare": false, 00:13:01.263 "compare_and_write": false, 00:13:01.263 "abort": true, 00:13:01.263 "nvme_admin": false, 00:13:01.263 "nvme_io": false 00:13:01.263 }, 00:13:01.263 "memory_domains": [ 00:13:01.263 { 00:13:01.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.263 "dma_device_type": 2 00:13:01.263 } 00:13:01.263 ], 00:13:01.263 "driver_specific": {} 00:13:01.263 } 00:13:01.263 ] 00:13:01.263 12:05:08 -- common/autotest_common.sh@895 -- # return 0 00:13:01.263 12:05:08 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:01.263 12:05:08 -- bdev/bdev_raid.sh@254 -- # (( i < 
num_base_bdevs )) 00:13:01.263 12:05:08 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:01.263 12:05:08 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:01.263 12:05:08 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:01.263 12:05:08 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:01.263 12:05:08 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:01.263 12:05:08 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:01.263 12:05:08 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:01.263 12:05:08 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:01.263 12:05:08 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:01.263 12:05:08 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:01.263 12:05:08 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:01.263 12:05:08 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:01.522 12:05:08 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:01.522 "name": "Existed_Raid", 00:13:01.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.522 "strip_size_kb": 0, 00:13:01.522 "state": "configuring", 00:13:01.522 "raid_level": "raid1", 00:13:01.522 "superblock": false, 00:13:01.522 "num_base_bdevs": 4, 00:13:01.522 "num_base_bdevs_discovered": 3, 00:13:01.522 "num_base_bdevs_operational": 4, 00:13:01.522 "base_bdevs_list": [ 00:13:01.522 { 00:13:01.522 "name": "BaseBdev1", 00:13:01.522 "uuid": "e6e49119-5308-4f63-934b-9a1bd70689b3", 00:13:01.522 "is_configured": true, 00:13:01.522 "data_offset": 0, 00:13:01.522 "data_size": 65536 00:13:01.522 }, 00:13:01.522 { 00:13:01.522 "name": "BaseBdev2", 00:13:01.522 "uuid": "20266428-4027-4a61-bcfa-73d2822d7562", 00:13:01.522 "is_configured": true, 00:13:01.522 "data_offset": 0, 00:13:01.522 "data_size": 65536 
00:13:01.522 }, 00:13:01.522 { 00:13:01.522 "name": "BaseBdev3", 00:13:01.522 "uuid": "488d7b0f-127a-4d4d-bc80-3bec304ad107", 00:13:01.522 "is_configured": true, 00:13:01.522 "data_offset": 0, 00:13:01.522 "data_size": 65536 00:13:01.522 }, 00:13:01.522 { 00:13:01.523 "name": "BaseBdev4", 00:13:01.523 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:01.523 "is_configured": false, 00:13:01.523 "data_offset": 0, 00:13:01.523 "data_size": 0 00:13:01.523 } 00:13:01.523 ] 00:13:01.523 }' 00:13:01.523 12:05:08 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:01.523 12:05:08 -- common/autotest_common.sh@10 -- # set +x 00:13:02.090 12:05:09 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:13:02.090 [2024-07-25 12:05:09.268455] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:02.090 [2024-07-25 12:05:09.268485] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x16175f0 00:13:02.090 [2024-07-25 12:05:09.268491] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:02.090 [2024-07-25 12:05:09.268665] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x161be40 00:13:02.090 [2024-07-25 12:05:09.268751] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x16175f0 00:13:02.090 [2024-07-25 12:05:09.268757] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x16175f0 00:13:02.090 [2024-07-25 12:05:09.268876] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:02.090 BaseBdev4 00:13:02.090 12:05:09 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:13:02.090 12:05:09 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:13:02.090 12:05:09 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:02.090 12:05:09 -- 
common/autotest_common.sh@889 -- # local i 00:13:02.090 12:05:09 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:02.090 12:05:09 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:02.090 12:05:09 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:02.349 12:05:09 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:02.349 [ 00:13:02.349 { 00:13:02.349 "name": "BaseBdev4", 00:13:02.349 "aliases": [ 00:13:02.349 "f0ed23f9-4770-4810-b10f-35cd5947bf6f" 00:13:02.349 ], 00:13:02.349 "product_name": "Malloc disk", 00:13:02.349 "block_size": 512, 00:13:02.349 "num_blocks": 65536, 00:13:02.349 "uuid": "f0ed23f9-4770-4810-b10f-35cd5947bf6f", 00:13:02.349 "assigned_rate_limits": { 00:13:02.349 "rw_ios_per_sec": 0, 00:13:02.349 "rw_mbytes_per_sec": 0, 00:13:02.349 "r_mbytes_per_sec": 0, 00:13:02.349 "w_mbytes_per_sec": 0 00:13:02.349 }, 00:13:02.349 "claimed": true, 00:13:02.349 "claim_type": "exclusive_write", 00:13:02.349 "zoned": false, 00:13:02.349 "supported_io_types": { 00:13:02.349 "read": true, 00:13:02.349 "write": true, 00:13:02.349 "unmap": true, 00:13:02.349 "write_zeroes": true, 00:13:02.349 "flush": true, 00:13:02.349 "reset": true, 00:13:02.349 "compare": false, 00:13:02.349 "compare_and_write": false, 00:13:02.349 "abort": true, 00:13:02.349 "nvme_admin": false, 00:13:02.349 "nvme_io": false 00:13:02.349 }, 00:13:02.349 "memory_domains": [ 00:13:02.349 { 00:13:02.349 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.349 "dma_device_type": 2 00:13:02.349 } 00:13:02.349 ], 00:13:02.349 "driver_specific": {} 00:13:02.349 } 00:13:02.349 ] 00:13:02.349 12:05:09 -- common/autotest_common.sh@895 -- # return 0 00:13:02.349 12:05:09 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:02.349 12:05:09 -- bdev/bdev_raid.sh@254 -- # (( i < 
num_base_bdevs )) 00:13:02.349 12:05:09 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:02.349 12:05:09 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:02.349 12:05:09 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:02.349 12:05:09 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:02.349 12:05:09 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:02.349 12:05:09 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:02.349 12:05:09 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:02.349 12:05:09 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:02.349 12:05:09 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:02.349 12:05:09 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:02.349 12:05:09 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:02.349 12:05:09 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:02.607 12:05:09 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:02.607 "name": "Existed_Raid", 00:13:02.607 "uuid": "6edae6f8-818f-4d54-8eb5-757f2dfdbd23", 00:13:02.607 "strip_size_kb": 0, 00:13:02.607 "state": "online", 00:13:02.607 "raid_level": "raid1", 00:13:02.607 "superblock": false, 00:13:02.607 "num_base_bdevs": 4, 00:13:02.607 "num_base_bdevs_discovered": 4, 00:13:02.607 "num_base_bdevs_operational": 4, 00:13:02.607 "base_bdevs_list": [ 00:13:02.607 { 00:13:02.607 "name": "BaseBdev1", 00:13:02.607 "uuid": "e6e49119-5308-4f63-934b-9a1bd70689b3", 00:13:02.607 "is_configured": true, 00:13:02.607 "data_offset": 0, 00:13:02.607 "data_size": 65536 00:13:02.607 }, 00:13:02.607 { 00:13:02.607 "name": "BaseBdev2", 00:13:02.607 "uuid": "20266428-4027-4a61-bcfa-73d2822d7562", 00:13:02.607 "is_configured": true, 00:13:02.607 "data_offset": 0, 00:13:02.607 "data_size": 65536 00:13:02.607 }, 
00:13:02.607 { 00:13:02.607 "name": "BaseBdev3", 00:13:02.607 "uuid": "488d7b0f-127a-4d4d-bc80-3bec304ad107", 00:13:02.607 "is_configured": true, 00:13:02.607 "data_offset": 0, 00:13:02.607 "data_size": 65536 00:13:02.607 }, 00:13:02.607 { 00:13:02.607 "name": "BaseBdev4", 00:13:02.607 "uuid": "f0ed23f9-4770-4810-b10f-35cd5947bf6f", 00:13:02.607 "is_configured": true, 00:13:02.607 "data_offset": 0, 00:13:02.607 "data_size": 65536 00:13:02.607 } 00:13:02.607 ] 00:13:02.607 }' 00:13:02.607 12:05:09 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:02.607 12:05:09 -- common/autotest_common.sh@10 -- # set +x 00:13:03.173 12:05:10 -- bdev/bdev_raid.sh@262 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:03.173 [2024-07-25 12:05:10.463676] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:03.432 12:05:10 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:13:03.432 12:05:10 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:13:03.432 12:05:10 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:03.432 12:05:10 -- bdev/bdev_raid.sh@196 -- # return 0 00:13:03.432 12:05:10 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:13:03.432 12:05:10 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:03.432 12:05:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:03.432 12:05:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:03.432 12:05:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:03.432 12:05:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:03.432 12:05:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:03.432 12:05:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:03.432 12:05:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:03.432 12:05:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:03.432 12:05:10 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:13:03.432 12:05:10 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:03.432 12:05:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:03.432 12:05:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:03.432 "name": "Existed_Raid", 00:13:03.432 "uuid": "6edae6f8-818f-4d54-8eb5-757f2dfdbd23", 00:13:03.432 "strip_size_kb": 0, 00:13:03.432 "state": "online", 00:13:03.432 "raid_level": "raid1", 00:13:03.432 "superblock": false, 00:13:03.432 "num_base_bdevs": 4, 00:13:03.432 "num_base_bdevs_discovered": 3, 00:13:03.432 "num_base_bdevs_operational": 3, 00:13:03.432 "base_bdevs_list": [ 00:13:03.432 { 00:13:03.432 "name": null, 00:13:03.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:03.432 "is_configured": false, 00:13:03.432 "data_offset": 0, 00:13:03.432 "data_size": 65536 00:13:03.432 }, 00:13:03.432 { 00:13:03.432 "name": "BaseBdev2", 00:13:03.432 "uuid": "20266428-4027-4a61-bcfa-73d2822d7562", 00:13:03.432 "is_configured": true, 00:13:03.432 "data_offset": 0, 00:13:03.432 "data_size": 65536 00:13:03.432 }, 00:13:03.432 { 00:13:03.432 "name": "BaseBdev3", 00:13:03.432 "uuid": "488d7b0f-127a-4d4d-bc80-3bec304ad107", 00:13:03.432 "is_configured": true, 00:13:03.432 "data_offset": 0, 00:13:03.432 "data_size": 65536 00:13:03.432 }, 00:13:03.432 { 00:13:03.432 "name": "BaseBdev4", 00:13:03.432 "uuid": "f0ed23f9-4770-4810-b10f-35cd5947bf6f", 00:13:03.432 "is_configured": true, 00:13:03.432 "data_offset": 0, 00:13:03.432 "data_size": 65536 00:13:03.432 } 00:13:03.432 ] 00:13:03.432 }' 00:13:03.432 12:05:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:03.432 12:05:10 -- common/autotest_common.sh@10 -- # set +x 00:13:03.999 12:05:11 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:13:03.999 12:05:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:03.999 12:05:11 -- 
bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:03.999 12:05:11 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:04.256 12:05:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:04.256 12:05:11 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:04.256 12:05:11 -- bdev/bdev_raid.sh@279 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:04.256 [2024-07-25 12:05:11.487107] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:04.256 12:05:11 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:04.256 12:05:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:04.256 12:05:11 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:04.256 12:05:11 -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:04.514 12:05:11 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:04.514 12:05:11 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:04.514 12:05:11 -- bdev/bdev_raid.sh@279 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:04.773 [2024-07-25 12:05:11.846031] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:04.773 12:05:11 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:04.773 12:05:11 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:04.773 12:05:11 -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:04.773 12:05:11 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:04.773 12:05:12 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:04.773 12:05:12 -- bdev/bdev_raid.sh@275 -- # '[' 
Existed_Raid '!=' Existed_Raid ']' 00:13:04.773 12:05:12 -- bdev/bdev_raid.sh@279 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:13:05.030 [2024-07-25 12:05:12.186357] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:05.030 [2024-07-25 12:05:12.186380] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:05.030 [2024-07-25 12:05:12.186407] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:05.030 [2024-07-25 12:05:12.198241] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:05.030 [2024-07-25 12:05:12.198268] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16175f0 name Existed_Raid, state offline 00:13:05.031 12:05:12 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:05.031 12:05:12 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:05.031 12:05:12 -- bdev/bdev_raid.sh@281 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:05.031 12:05:12 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:13:05.289 12:05:12 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:13:05.289 12:05:12 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:13:05.289 12:05:12 -- bdev/bdev_raid.sh@287 -- # killprocess 1248942 00:13:05.289 12:05:12 -- common/autotest_common.sh@926 -- # '[' -z 1248942 ']' 00:13:05.289 12:05:12 -- common/autotest_common.sh@930 -- # kill -0 1248942 00:13:05.289 12:05:12 -- common/autotest_common.sh@931 -- # uname 00:13:05.289 12:05:12 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:05.289 12:05:12 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1248942 00:13:05.289 12:05:12 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:05.289 12:05:12 -- common/autotest_common.sh@936 -- # 
'[' reactor_0 = sudo ']' 00:13:05.289 12:05:12 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1248942' 00:13:05.289 killing process with pid 1248942 00:13:05.289 12:05:12 -- common/autotest_common.sh@945 -- # kill 1248942 00:13:05.289 [2024-07-25 12:05:12.415678] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:05.289 12:05:12 -- common/autotest_common.sh@950 -- # wait 1248942 00:13:05.289 [2024-07-25 12:05:12.416582] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:05.547 12:05:12 -- bdev/bdev_raid.sh@289 -- # return 0 00:13:05.547 00:13:05.547 real 0m10.051s 00:13:05.547 user 0m17.718s 00:13:05.547 sys 0m2.015s 00:13:05.547 12:05:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:05.547 12:05:12 -- common/autotest_common.sh@10 -- # set +x 00:13:05.547 ************************************ 00:13:05.547 END TEST raid_state_function_test 00:13:05.547 ************************************ 00:13:05.547 12:05:12 -- bdev/bdev_raid.sh@728 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:13:05.547 12:05:12 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:13:05.547 12:05:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:05.547 12:05:12 -- common/autotest_common.sh@10 -- # set +x 00:13:05.547 ************************************ 00:13:05.547 START TEST raid_state_function_test_sb 00:13:05.547 ************************************ 00:13:05.547 12:05:12 -- common/autotest_common.sh@1104 -- # raid_state_function_test raid1 4 true 00:13:05.547 12:05:12 -- bdev/bdev_raid.sh@202 -- # local raid_level=raid1 00:13:05.547 12:05:12 -- bdev/bdev_raid.sh@203 -- # local num_base_bdevs=4 00:13:05.547 12:05:12 -- bdev/bdev_raid.sh@204 -- # local superblock=true 00:13:05.547 12:05:12 -- bdev/bdev_raid.sh@205 -- # local raid_bdev 00:13:05.547 12:05:12 -- bdev/bdev_raid.sh@206 -- # (( i = 1 )) 00:13:05.547 12:05:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs 
)) 00:13:05.547 12:05:12 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev1 00:13:05.547 12:05:12 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:05.547 12:05:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:05.547 12:05:12 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev2 00:13:05.547 12:05:12 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:05.547 12:05:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:05.547 12:05:12 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev3 00:13:05.547 12:05:12 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:05.547 12:05:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:05.547 12:05:12 -- bdev/bdev_raid.sh@208 -- # echo BaseBdev4 00:13:05.547 12:05:12 -- bdev/bdev_raid.sh@206 -- # (( i++ )) 00:13:05.547 12:05:12 -- bdev/bdev_raid.sh@206 -- # (( i <= num_base_bdevs )) 00:13:05.547 12:05:12 -- bdev/bdev_raid.sh@206 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:05.547 12:05:12 -- bdev/bdev_raid.sh@206 -- # local base_bdevs 00:13:05.547 12:05:12 -- bdev/bdev_raid.sh@207 -- # local raid_bdev_name=Existed_Raid 00:13:05.547 12:05:12 -- bdev/bdev_raid.sh@208 -- # local strip_size 00:13:05.547 12:05:12 -- bdev/bdev_raid.sh@209 -- # local strip_size_create_arg 00:13:05.547 12:05:12 -- bdev/bdev_raid.sh@210 -- # local superblock_create_arg 00:13:05.547 12:05:12 -- bdev/bdev_raid.sh@212 -- # '[' raid1 '!=' raid1 ']' 00:13:05.547 12:05:12 -- bdev/bdev_raid.sh@216 -- # strip_size=0 00:13:05.547 12:05:12 -- bdev/bdev_raid.sh@219 -- # '[' true = true ']' 00:13:05.548 12:05:12 -- bdev/bdev_raid.sh@220 -- # superblock_create_arg=-s 00:13:05.548 12:05:12 -- bdev/bdev_raid.sh@226 -- # raid_pid=1250632 00:13:05.548 12:05:12 -- bdev/bdev_raid.sh@227 -- # echo 'Process raid pid: 1250632' 00:13:05.548 Process raid pid: 1250632 00:13:05.548 12:05:12 -- bdev/bdev_raid.sh@225 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:05.548 
12:05:12 -- bdev/bdev_raid.sh@228 -- # waitforlisten 1250632 /var/tmp/spdk-raid.sock 00:13:05.548 12:05:12 -- common/autotest_common.sh@819 -- # '[' -z 1250632 ']' 00:13:05.548 12:05:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:05.548 12:05:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:05.548 12:05:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:05.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:05.548 12:05:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:05.548 12:05:12 -- common/autotest_common.sh@10 -- # set +x 00:13:05.548 [2024-07-25 12:05:12.747436] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:13:05.548 [2024-07-25 12:05:12.747486] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:05.548 [2024-07-25 12:05:12.836325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.805 [2024-07-25 12:05:12.922702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.805 [2024-07-25 12:05:12.972826] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:05.805 [2024-07-25 12:05:12.972853] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:06.372 12:05:13 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:06.372 12:05:13 -- common/autotest_common.sh@852 -- # return 0 00:13:06.372 12:05:13 -- bdev/bdev_raid.sh@232 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:06.631 [2024-07-25 12:05:13.721781] 
bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:06.631 [2024-07-25 12:05:13.721813] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:06.631 [2024-07-25 12:05:13.721820] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:06.631 [2024-07-25 12:05:13.721828] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:06.631 [2024-07-25 12:05:13.721849] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:06.631 [2024-07-25 12:05:13.721857] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:06.631 [2024-07-25 12:05:13.721862] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:06.631 [2024-07-25 12:05:13.721869] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:06.631 12:05:13 -- bdev/bdev_raid.sh@233 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:06.631 12:05:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:06.631 12:05:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:06.631 12:05:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:06.631 12:05:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:06.631 12:05:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:06.631 12:05:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:06.631 12:05:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:06.631 12:05:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:06.631 12:05:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:06.631 12:05:13 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:06.631 12:05:13 
-- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.631 12:05:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:06.631 "name": "Existed_Raid", 00:13:06.631 "uuid": "48b786e1-7cec-4781-a6dd-6f53b2f7e3f0", 00:13:06.631 "strip_size_kb": 0, 00:13:06.631 "state": "configuring", 00:13:06.631 "raid_level": "raid1", 00:13:06.631 "superblock": true, 00:13:06.631 "num_base_bdevs": 4, 00:13:06.631 "num_base_bdevs_discovered": 0, 00:13:06.631 "num_base_bdevs_operational": 4, 00:13:06.631 "base_bdevs_list": [ 00:13:06.631 { 00:13:06.631 "name": "BaseBdev1", 00:13:06.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.631 "is_configured": false, 00:13:06.631 "data_offset": 0, 00:13:06.631 "data_size": 0 00:13:06.631 }, 00:13:06.631 { 00:13:06.631 "name": "BaseBdev2", 00:13:06.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.631 "is_configured": false, 00:13:06.631 "data_offset": 0, 00:13:06.631 "data_size": 0 00:13:06.631 }, 00:13:06.631 { 00:13:06.631 "name": "BaseBdev3", 00:13:06.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.631 "is_configured": false, 00:13:06.631 "data_offset": 0, 00:13:06.631 "data_size": 0 00:13:06.631 }, 00:13:06.631 { 00:13:06.631 "name": "BaseBdev4", 00:13:06.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.631 "is_configured": false, 00:13:06.631 "data_offset": 0, 00:13:06.631 "data_size": 0 00:13:06.631 } 00:13:06.631 ] 00:13:06.631 }' 00:13:06.631 12:05:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:06.631 12:05:13 -- common/autotest_common.sh@10 -- # set +x 00:13:07.197 12:05:14 -- bdev/bdev_raid.sh@234 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:07.197 [2024-07-25 12:05:14.475652] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:07.197 [2024-07-25 12:05:14.475678] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2273d80 
name Existed_Raid, state configuring 00:13:07.197 12:05:14 -- bdev/bdev_raid.sh@238 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:07.455 [2024-07-25 12:05:14.640100] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:07.455 [2024-07-25 12:05:14.640127] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:07.455 [2024-07-25 12:05:14.640133] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:07.455 [2024-07-25 12:05:14.640140] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:07.455 [2024-07-25 12:05:14.640145] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:07.455 [2024-07-25 12:05:14.640152] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:07.455 [2024-07-25 12:05:14.640157] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:07.455 [2024-07-25 12:05:14.640163] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:07.455 12:05:14 -- bdev/bdev_raid.sh@239 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:07.714 [2024-07-25 12:05:14.814478] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:07.714 BaseBdev1 00:13:07.714 12:05:14 -- bdev/bdev_raid.sh@240 -- # waitforbdev BaseBdev1 00:13:07.714 12:05:14 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:13:07.714 12:05:14 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:07.714 12:05:14 -- common/autotest_common.sh@889 -- # local i 00:13:07.714 12:05:14 -- 
common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:07.714 12:05:14 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:07.714 12:05:14 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:07.714 12:05:14 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:07.972 [ 00:13:07.972 { 00:13:07.972 "name": "BaseBdev1", 00:13:07.972 "aliases": [ 00:13:07.972 "bb3393da-f903-42bf-8f07-f2ee1625aea1" 00:13:07.972 ], 00:13:07.972 "product_name": "Malloc disk", 00:13:07.972 "block_size": 512, 00:13:07.972 "num_blocks": 65536, 00:13:07.972 "uuid": "bb3393da-f903-42bf-8f07-f2ee1625aea1", 00:13:07.972 "assigned_rate_limits": { 00:13:07.972 "rw_ios_per_sec": 0, 00:13:07.972 "rw_mbytes_per_sec": 0, 00:13:07.972 "r_mbytes_per_sec": 0, 00:13:07.972 "w_mbytes_per_sec": 0 00:13:07.972 }, 00:13:07.972 "claimed": true, 00:13:07.972 "claim_type": "exclusive_write", 00:13:07.972 "zoned": false, 00:13:07.972 "supported_io_types": { 00:13:07.972 "read": true, 00:13:07.972 "write": true, 00:13:07.972 "unmap": true, 00:13:07.972 "write_zeroes": true, 00:13:07.972 "flush": true, 00:13:07.972 "reset": true, 00:13:07.972 "compare": false, 00:13:07.972 "compare_and_write": false, 00:13:07.972 "abort": true, 00:13:07.972 "nvme_admin": false, 00:13:07.972 "nvme_io": false 00:13:07.972 }, 00:13:07.972 "memory_domains": [ 00:13:07.973 { 00:13:07.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.973 "dma_device_type": 2 00:13:07.973 } 00:13:07.973 ], 00:13:07.973 "driver_specific": {} 00:13:07.973 } 00:13:07.973 ] 00:13:07.973 12:05:15 -- common/autotest_common.sh@895 -- # return 0 00:13:07.973 12:05:15 -- bdev/bdev_raid.sh@241 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:07.973 12:05:15 -- bdev/bdev_raid.sh@117 -- # local 
raid_bdev_name=Existed_Raid 00:13:07.973 12:05:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:07.973 12:05:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:07.973 12:05:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:07.973 12:05:15 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:07.973 12:05:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:07.973 12:05:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:07.973 12:05:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:07.973 12:05:15 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:07.973 12:05:15 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:07.973 12:05:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:08.232 12:05:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:08.232 "name": "Existed_Raid", 00:13:08.232 "uuid": "7e2834a7-5cf0-40ef-aa01-3de7ed927b84", 00:13:08.232 "strip_size_kb": 0, 00:13:08.232 "state": "configuring", 00:13:08.232 "raid_level": "raid1", 00:13:08.232 "superblock": true, 00:13:08.232 "num_base_bdevs": 4, 00:13:08.232 "num_base_bdevs_discovered": 1, 00:13:08.232 "num_base_bdevs_operational": 4, 00:13:08.232 "base_bdevs_list": [ 00:13:08.232 { 00:13:08.232 "name": "BaseBdev1", 00:13:08.232 "uuid": "bb3393da-f903-42bf-8f07-f2ee1625aea1", 00:13:08.232 "is_configured": true, 00:13:08.232 "data_offset": 2048, 00:13:08.232 "data_size": 63488 00:13:08.232 }, 00:13:08.232 { 00:13:08.232 "name": "BaseBdev2", 00:13:08.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.232 "is_configured": false, 00:13:08.232 "data_offset": 0, 00:13:08.232 "data_size": 0 00:13:08.232 }, 00:13:08.232 { 00:13:08.232 "name": "BaseBdev3", 00:13:08.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.232 "is_configured": false, 00:13:08.232 "data_offset": 0, 
00:13:08.232 "data_size": 0 00:13:08.232 }, 00:13:08.232 { 00:13:08.232 "name": "BaseBdev4", 00:13:08.232 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.232 "is_configured": false, 00:13:08.232 "data_offset": 0, 00:13:08.232 "data_size": 0 00:13:08.232 } 00:13:08.232 ] 00:13:08.232 }' 00:13:08.232 12:05:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:08.232 12:05:15 -- common/autotest_common.sh@10 -- # set +x 00:13:08.490 12:05:15 -- bdev/bdev_raid.sh@242 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:08.750 [2024-07-25 12:05:15.885235] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:08.750 [2024-07-25 12:05:15.885268] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2274000 name Existed_Raid, state configuring 00:13:08.750 12:05:15 -- bdev/bdev_raid.sh@244 -- # '[' true = true ']' 00:13:08.750 12:05:15 -- bdev/bdev_raid.sh@246 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:09.008 12:05:16 -- bdev/bdev_raid.sh@247 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:09.008 BaseBdev1 00:13:09.008 12:05:16 -- bdev/bdev_raid.sh@248 -- # waitforbdev BaseBdev1 00:13:09.008 12:05:16 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev1 00:13:09.008 12:05:16 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:09.008 12:05:16 -- common/autotest_common.sh@889 -- # local i 00:13:09.008 12:05:16 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:09.008 12:05:16 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:09.008 12:05:16 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:09.267 12:05:16 -- 
common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:09.267 [ 00:13:09.267 { 00:13:09.267 "name": "BaseBdev1", 00:13:09.267 "aliases": [ 00:13:09.267 "1bac9759-55dc-4a35-b4e0-31d5d0d8ddf2" 00:13:09.267 ], 00:13:09.267 "product_name": "Malloc disk", 00:13:09.267 "block_size": 512, 00:13:09.267 "num_blocks": 65536, 00:13:09.267 "uuid": "1bac9759-55dc-4a35-b4e0-31d5d0d8ddf2", 00:13:09.267 "assigned_rate_limits": { 00:13:09.267 "rw_ios_per_sec": 0, 00:13:09.267 "rw_mbytes_per_sec": 0, 00:13:09.267 "r_mbytes_per_sec": 0, 00:13:09.267 "w_mbytes_per_sec": 0 00:13:09.267 }, 00:13:09.267 "claimed": false, 00:13:09.267 "zoned": false, 00:13:09.267 "supported_io_types": { 00:13:09.267 "read": true, 00:13:09.267 "write": true, 00:13:09.267 "unmap": true, 00:13:09.267 "write_zeroes": true, 00:13:09.267 "flush": true, 00:13:09.267 "reset": true, 00:13:09.267 "compare": false, 00:13:09.267 "compare_and_write": false, 00:13:09.267 "abort": true, 00:13:09.267 "nvme_admin": false, 00:13:09.267 "nvme_io": false 00:13:09.267 }, 00:13:09.267 "memory_domains": [ 00:13:09.267 { 00:13:09.267 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:09.267 "dma_device_type": 2 00:13:09.267 } 00:13:09.267 ], 00:13:09.267 "driver_specific": {} 00:13:09.267 } 00:13:09.267 ] 00:13:09.267 12:05:16 -- common/autotest_common.sh@895 -- # return 0 00:13:09.267 12:05:16 -- bdev/bdev_raid.sh@253 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:09.526 [2024-07-25 12:05:16.721021] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:09.526 [2024-07-25 12:05:16.722114] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:09.526 [2024-07-25 12:05:16.722150] bdev_raid_rpc.c: 
302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:09.526 [2024-07-25 12:05:16.722156] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:09.526 [2024-07-25 12:05:16.722179] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:09.526 [2024-07-25 12:05:16.722185] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:09.526 [2024-07-25 12:05:16.722192] bdev_raid_rpc.c: 302:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:09.526 12:05:16 -- bdev/bdev_raid.sh@254 -- # (( i = 1 )) 00:13:09.526 12:05:16 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:09.526 12:05:16 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:09.526 12:05:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:09.526 12:05:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:09.526 12:05:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:09.526 12:05:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:09.526 12:05:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:09.526 12:05:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:09.526 12:05:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:09.526 12:05:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:09.526 12:05:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:09.526 12:05:16 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:09.526 12:05:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:09.784 12:05:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:09.784 "name": "Existed_Raid", 00:13:09.784 "uuid": "be82a939-d925-4406-9a34-c95347723255", 00:13:09.784 
"strip_size_kb": 0, 00:13:09.784 "state": "configuring", 00:13:09.784 "raid_level": "raid1", 00:13:09.784 "superblock": true, 00:13:09.784 "num_base_bdevs": 4, 00:13:09.784 "num_base_bdevs_discovered": 1, 00:13:09.784 "num_base_bdevs_operational": 4, 00:13:09.784 "base_bdevs_list": [ 00:13:09.784 { 00:13:09.784 "name": "BaseBdev1", 00:13:09.784 "uuid": "1bac9759-55dc-4a35-b4e0-31d5d0d8ddf2", 00:13:09.784 "is_configured": true, 00:13:09.784 "data_offset": 2048, 00:13:09.784 "data_size": 63488 00:13:09.784 }, 00:13:09.784 { 00:13:09.784 "name": "BaseBdev2", 00:13:09.784 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.784 "is_configured": false, 00:13:09.784 "data_offset": 0, 00:13:09.784 "data_size": 0 00:13:09.784 }, 00:13:09.784 { 00:13:09.784 "name": "BaseBdev3", 00:13:09.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.785 "is_configured": false, 00:13:09.785 "data_offset": 0, 00:13:09.785 "data_size": 0 00:13:09.785 }, 00:13:09.785 { 00:13:09.785 "name": "BaseBdev4", 00:13:09.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:09.785 "is_configured": false, 00:13:09.785 "data_offset": 0, 00:13:09.785 "data_size": 0 00:13:09.785 } 00:13:09.785 ] 00:13:09.785 }' 00:13:09.785 12:05:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:09.785 12:05:16 -- common/autotest_common.sh@10 -- # set +x 00:13:10.042 12:05:17 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:10.300 [2024-07-25 12:05:17.507061] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:10.300 BaseBdev2 00:13:10.300 12:05:17 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev2 00:13:10.300 12:05:17 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev2 00:13:10.300 12:05:17 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:10.300 12:05:17 -- common/autotest_common.sh@889 -- # local i 00:13:10.300 
12:05:17 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:10.300 12:05:17 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:10.300 12:05:17 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:10.560 12:05:17 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:10.560 [ 00:13:10.560 { 00:13:10.560 "name": "BaseBdev2", 00:13:10.560 "aliases": [ 00:13:10.560 "6afaf864-ee79-4b70-b096-30fd15e3f9c0" 00:13:10.560 ], 00:13:10.560 "product_name": "Malloc disk", 00:13:10.560 "block_size": 512, 00:13:10.560 "num_blocks": 65536, 00:13:10.560 "uuid": "6afaf864-ee79-4b70-b096-30fd15e3f9c0", 00:13:10.560 "assigned_rate_limits": { 00:13:10.560 "rw_ios_per_sec": 0, 00:13:10.560 "rw_mbytes_per_sec": 0, 00:13:10.560 "r_mbytes_per_sec": 0, 00:13:10.560 "w_mbytes_per_sec": 0 00:13:10.560 }, 00:13:10.560 "claimed": true, 00:13:10.560 "claim_type": "exclusive_write", 00:13:10.560 "zoned": false, 00:13:10.560 "supported_io_types": { 00:13:10.560 "read": true, 00:13:10.560 "write": true, 00:13:10.560 "unmap": true, 00:13:10.560 "write_zeroes": true, 00:13:10.560 "flush": true, 00:13:10.560 "reset": true, 00:13:10.560 "compare": false, 00:13:10.560 "compare_and_write": false, 00:13:10.560 "abort": true, 00:13:10.560 "nvme_admin": false, 00:13:10.560 "nvme_io": false 00:13:10.560 }, 00:13:10.560 "memory_domains": [ 00:13:10.560 { 00:13:10.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:10.560 "dma_device_type": 2 00:13:10.560 } 00:13:10.560 ], 00:13:10.560 "driver_specific": {} 00:13:10.560 } 00:13:10.560 ] 00:13:10.560 12:05:17 -- common/autotest_common.sh@895 -- # return 0 00:13:10.560 12:05:17 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:10.560 12:05:17 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:10.560 12:05:17 -- 
bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:10.560 12:05:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:10.560 12:05:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:10.560 12:05:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:10.560 12:05:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:10.560 12:05:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:10.560 12:05:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:10.560 12:05:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:10.560 12:05:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:10.560 12:05:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:10.560 12:05:17 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:10.560 12:05:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.820 12:05:18 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:10.820 "name": "Existed_Raid", 00:13:10.820 "uuid": "be82a939-d925-4406-9a34-c95347723255", 00:13:10.820 "strip_size_kb": 0, 00:13:10.820 "state": "configuring", 00:13:10.820 "raid_level": "raid1", 00:13:10.820 "superblock": true, 00:13:10.820 "num_base_bdevs": 4, 00:13:10.820 "num_base_bdevs_discovered": 2, 00:13:10.820 "num_base_bdevs_operational": 4, 00:13:10.820 "base_bdevs_list": [ 00:13:10.820 { 00:13:10.820 "name": "BaseBdev1", 00:13:10.820 "uuid": "1bac9759-55dc-4a35-b4e0-31d5d0d8ddf2", 00:13:10.820 "is_configured": true, 00:13:10.820 "data_offset": 2048, 00:13:10.820 "data_size": 63488 00:13:10.820 }, 00:13:10.820 { 00:13:10.820 "name": "BaseBdev2", 00:13:10.820 "uuid": "6afaf864-ee79-4b70-b096-30fd15e3f9c0", 00:13:10.820 "is_configured": true, 00:13:10.820 "data_offset": 2048, 00:13:10.820 "data_size": 63488 00:13:10.820 }, 00:13:10.820 { 00:13:10.820 
"name": "BaseBdev3", 00:13:10.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.820 "is_configured": false, 00:13:10.820 "data_offset": 0, 00:13:10.820 "data_size": 0 00:13:10.820 }, 00:13:10.820 { 00:13:10.820 "name": "BaseBdev4", 00:13:10.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.820 "is_configured": false, 00:13:10.820 "data_offset": 0, 00:13:10.820 "data_size": 0 00:13:10.820 } 00:13:10.820 ] 00:13:10.820 }' 00:13:10.820 12:05:18 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:10.820 12:05:18 -- common/autotest_common.sh@10 -- # set +x 00:13:11.386 12:05:18 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:11.386 [2024-07-25 12:05:18.644769] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:11.386 BaseBdev3 00:13:11.386 12:05:18 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev3 00:13:11.386 12:05:18 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev3 00:13:11.386 12:05:18 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:11.386 12:05:18 -- common/autotest_common.sh@889 -- # local i 00:13:11.386 12:05:18 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:11.386 12:05:18 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:11.386 12:05:18 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:11.644 12:05:18 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:11.903 [ 00:13:11.903 { 00:13:11.903 "name": "BaseBdev3", 00:13:11.903 "aliases": [ 00:13:11.903 "32f4b6b2-330c-474c-8693-a701bd1be193" 00:13:11.903 ], 00:13:11.903 "product_name": "Malloc disk", 00:13:11.903 "block_size": 512, 00:13:11.903 "num_blocks": 65536, 
00:13:11.903 "uuid": "32f4b6b2-330c-474c-8693-a701bd1be193", 00:13:11.903 "assigned_rate_limits": { 00:13:11.903 "rw_ios_per_sec": 0, 00:13:11.903 "rw_mbytes_per_sec": 0, 00:13:11.903 "r_mbytes_per_sec": 0, 00:13:11.903 "w_mbytes_per_sec": 0 00:13:11.903 }, 00:13:11.903 "claimed": true, 00:13:11.903 "claim_type": "exclusive_write", 00:13:11.903 "zoned": false, 00:13:11.903 "supported_io_types": { 00:13:11.903 "read": true, 00:13:11.903 "write": true, 00:13:11.903 "unmap": true, 00:13:11.903 "write_zeroes": true, 00:13:11.903 "flush": true, 00:13:11.903 "reset": true, 00:13:11.903 "compare": false, 00:13:11.903 "compare_and_write": false, 00:13:11.903 "abort": true, 00:13:11.903 "nvme_admin": false, 00:13:11.903 "nvme_io": false 00:13:11.903 }, 00:13:11.903 "memory_domains": [ 00:13:11.903 { 00:13:11.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.903 "dma_device_type": 2 00:13:11.903 } 00:13:11.903 ], 00:13:11.903 "driver_specific": {} 00:13:11.903 } 00:13:11.903 ] 00:13:11.903 12:05:19 -- common/autotest_common.sh@895 -- # return 0 00:13:11.903 12:05:19 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:11.903 12:05:19 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:11.903 12:05:19 -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:11.903 12:05:19 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:11.903 12:05:19 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:11.903 12:05:19 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:11.903 12:05:19 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:11.903 12:05:19 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:11.903 12:05:19 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:11.903 12:05:19 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:11.903 12:05:19 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:11.903 12:05:19 -- bdev/bdev_raid.sh@125 -- # local 
tmp 00:13:11.903 12:05:19 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:11.903 12:05:19 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:11.903 12:05:19 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:11.903 "name": "Existed_Raid", 00:13:11.903 "uuid": "be82a939-d925-4406-9a34-c95347723255", 00:13:11.903 "strip_size_kb": 0, 00:13:11.903 "state": "configuring", 00:13:11.903 "raid_level": "raid1", 00:13:11.903 "superblock": true, 00:13:11.903 "num_base_bdevs": 4, 00:13:11.903 "num_base_bdevs_discovered": 3, 00:13:11.903 "num_base_bdevs_operational": 4, 00:13:11.903 "base_bdevs_list": [ 00:13:11.903 { 00:13:11.903 "name": "BaseBdev1", 00:13:11.903 "uuid": "1bac9759-55dc-4a35-b4e0-31d5d0d8ddf2", 00:13:11.903 "is_configured": true, 00:13:11.903 "data_offset": 2048, 00:13:11.903 "data_size": 63488 00:13:11.903 }, 00:13:11.903 { 00:13:11.903 "name": "BaseBdev2", 00:13:11.903 "uuid": "6afaf864-ee79-4b70-b096-30fd15e3f9c0", 00:13:11.903 "is_configured": true, 00:13:11.903 "data_offset": 2048, 00:13:11.903 "data_size": 63488 00:13:11.903 }, 00:13:11.903 { 00:13:11.903 "name": "BaseBdev3", 00:13:11.903 "uuid": "32f4b6b2-330c-474c-8693-a701bd1be193", 00:13:11.903 "is_configured": true, 00:13:11.903 "data_offset": 2048, 00:13:11.903 "data_size": 63488 00:13:11.903 }, 00:13:11.903 { 00:13:11.903 "name": "BaseBdev4", 00:13:11.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.903 "is_configured": false, 00:13:11.903 "data_offset": 0, 00:13:11.903 "data_size": 0 00:13:11.903 } 00:13:11.903 ] 00:13:11.903 }' 00:13:11.903 12:05:19 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:11.903 12:05:19 -- common/autotest_common.sh@10 -- # set +x 00:13:12.470 12:05:19 -- bdev/bdev_raid.sh@256 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:13:12.729 
[2024-07-25 12:05:19.814605] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:12.729 [2024-07-25 12:05:19.814741] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x2414130 00:13:12.729 [2024-07-25 12:05:19.814751] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:12.729 [2024-07-25 12:05:19.814875] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x226d030 00:13:12.729 [2024-07-25 12:05:19.814958] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2414130 00:13:12.729 [2024-07-25 12:05:19.814964] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2414130 00:13:12.729 [2024-07-25 12:05:19.815025] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.729 BaseBdev4 00:13:12.729 12:05:19 -- bdev/bdev_raid.sh@257 -- # waitforbdev BaseBdev4 00:13:12.729 12:05:19 -- common/autotest_common.sh@887 -- # local bdev_name=BaseBdev4 00:13:12.729 12:05:19 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:13:12.729 12:05:19 -- common/autotest_common.sh@889 -- # local i 00:13:12.729 12:05:19 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:13:12.729 12:05:19 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:13:12.729 12:05:19 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:12.729 12:05:20 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:12.988 [ 00:13:12.988 { 00:13:12.988 "name": "BaseBdev4", 00:13:12.988 "aliases": [ 00:13:12.988 "a6fd7abd-8def-44c2-856f-04836a674832" 00:13:12.988 ], 00:13:12.988 "product_name": "Malloc disk", 00:13:12.988 "block_size": 512, 00:13:12.988 "num_blocks": 65536, 00:13:12.988 "uuid": 
"a6fd7abd-8def-44c2-856f-04836a674832", 00:13:12.988 "assigned_rate_limits": { 00:13:12.988 "rw_ios_per_sec": 0, 00:13:12.988 "rw_mbytes_per_sec": 0, 00:13:12.988 "r_mbytes_per_sec": 0, 00:13:12.988 "w_mbytes_per_sec": 0 00:13:12.988 }, 00:13:12.988 "claimed": true, 00:13:12.988 "claim_type": "exclusive_write", 00:13:12.988 "zoned": false, 00:13:12.988 "supported_io_types": { 00:13:12.988 "read": true, 00:13:12.988 "write": true, 00:13:12.988 "unmap": true, 00:13:12.988 "write_zeroes": true, 00:13:12.988 "flush": true, 00:13:12.988 "reset": true, 00:13:12.988 "compare": false, 00:13:12.988 "compare_and_write": false, 00:13:12.988 "abort": true, 00:13:12.988 "nvme_admin": false, 00:13:12.988 "nvme_io": false 00:13:12.988 }, 00:13:12.988 "memory_domains": [ 00:13:12.988 { 00:13:12.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.988 "dma_device_type": 2 00:13:12.988 } 00:13:12.988 ], 00:13:12.988 "driver_specific": {} 00:13:12.988 } 00:13:12.988 ] 00:13:12.988 12:05:20 -- common/autotest_common.sh@895 -- # return 0 00:13:12.988 12:05:20 -- bdev/bdev_raid.sh@254 -- # (( i++ )) 00:13:12.988 12:05:20 -- bdev/bdev_raid.sh@254 -- # (( i < num_base_bdevs )) 00:13:12.988 12:05:20 -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:12.988 12:05:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:12.988 12:05:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:12.988 12:05:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:12.988 12:05:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:12.988 12:05:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:12.988 12:05:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:12.988 12:05:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:12.988 12:05:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:12.988 12:05:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:12.988 12:05:20 -- 
bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:12.988 12:05:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:13.246 12:05:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:13.246 "name": "Existed_Raid", 00:13:13.246 "uuid": "be82a939-d925-4406-9a34-c95347723255", 00:13:13.246 "strip_size_kb": 0, 00:13:13.246 "state": "online", 00:13:13.246 "raid_level": "raid1", 00:13:13.246 "superblock": true, 00:13:13.246 "num_base_bdevs": 4, 00:13:13.246 "num_base_bdevs_discovered": 4, 00:13:13.246 "num_base_bdevs_operational": 4, 00:13:13.246 "base_bdevs_list": [ 00:13:13.246 { 00:13:13.246 "name": "BaseBdev1", 00:13:13.246 "uuid": "1bac9759-55dc-4a35-b4e0-31d5d0d8ddf2", 00:13:13.246 "is_configured": true, 00:13:13.246 "data_offset": 2048, 00:13:13.246 "data_size": 63488 00:13:13.246 }, 00:13:13.246 { 00:13:13.246 "name": "BaseBdev2", 00:13:13.246 "uuid": "6afaf864-ee79-4b70-b096-30fd15e3f9c0", 00:13:13.246 "is_configured": true, 00:13:13.246 "data_offset": 2048, 00:13:13.246 "data_size": 63488 00:13:13.246 }, 00:13:13.246 { 00:13:13.246 "name": "BaseBdev3", 00:13:13.246 "uuid": "32f4b6b2-330c-474c-8693-a701bd1be193", 00:13:13.246 "is_configured": true, 00:13:13.246 "data_offset": 2048, 00:13:13.246 "data_size": 63488 00:13:13.246 }, 00:13:13.246 { 00:13:13.246 "name": "BaseBdev4", 00:13:13.246 "uuid": "a6fd7abd-8def-44c2-856f-04836a674832", 00:13:13.246 "is_configured": true, 00:13:13.246 "data_offset": 2048, 00:13:13.246 "data_size": 63488 00:13:13.246 } 00:13:13.246 ] 00:13:13.246 }' 00:13:13.246 12:05:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:13.246 12:05:20 -- common/autotest_common.sh@10 -- # set +x 00:13:13.814 12:05:20 -- bdev/bdev_raid.sh@262 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:13.814 [2024-07-25 12:05:20.965632] 
bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:13.814 12:05:20 -- bdev/bdev_raid.sh@263 -- # local expected_state 00:13:13.814 12:05:20 -- bdev/bdev_raid.sh@264 -- # has_redundancy raid1 00:13:13.814 12:05:20 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:13.814 12:05:20 -- bdev/bdev_raid.sh@196 -- # return 0 00:13:13.814 12:05:20 -- bdev/bdev_raid.sh@267 -- # expected_state=online 00:13:13.814 12:05:20 -- bdev/bdev_raid.sh@269 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:13.814 12:05:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=Existed_Raid 00:13:13.814 12:05:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:13.814 12:05:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:13.814 12:05:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:13.814 12:05:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:13.814 12:05:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:13.814 12:05:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:13.814 12:05:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:13.814 12:05:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:13.814 12:05:20 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:13.814 12:05:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:14.073 12:05:21 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:14.073 "name": "Existed_Raid", 00:13:14.073 "uuid": "be82a939-d925-4406-9a34-c95347723255", 00:13:14.073 "strip_size_kb": 0, 00:13:14.073 "state": "online", 00:13:14.073 "raid_level": "raid1", 00:13:14.073 "superblock": true, 00:13:14.073 "num_base_bdevs": 4, 00:13:14.073 "num_base_bdevs_discovered": 3, 00:13:14.073 "num_base_bdevs_operational": 3, 00:13:14.073 "base_bdevs_list": [ 00:13:14.073 { 00:13:14.073 "name": null, 00:13:14.073 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:14.073 "is_configured": false, 00:13:14.073 "data_offset": 2048, 00:13:14.073 "data_size": 63488 00:13:14.073 }, 00:13:14.073 { 00:13:14.073 "name": "BaseBdev2", 00:13:14.073 "uuid": "6afaf864-ee79-4b70-b096-30fd15e3f9c0", 00:13:14.073 "is_configured": true, 00:13:14.073 "data_offset": 2048, 00:13:14.073 "data_size": 63488 00:13:14.073 }, 00:13:14.073 { 00:13:14.073 "name": "BaseBdev3", 00:13:14.073 "uuid": "32f4b6b2-330c-474c-8693-a701bd1be193", 00:13:14.073 "is_configured": true, 00:13:14.073 "data_offset": 2048, 00:13:14.073 "data_size": 63488 00:13:14.073 }, 00:13:14.073 { 00:13:14.073 "name": "BaseBdev4", 00:13:14.073 "uuid": "a6fd7abd-8def-44c2-856f-04836a674832", 00:13:14.073 "is_configured": true, 00:13:14.073 "data_offset": 2048, 00:13:14.073 "data_size": 63488 00:13:14.073 } 00:13:14.073 ] 00:13:14.073 }' 00:13:14.073 12:05:21 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:14.073 12:05:21 -- common/autotest_common.sh@10 -- # set +x 00:13:14.332 12:05:21 -- bdev/bdev_raid.sh@273 -- # (( i = 1 )) 00:13:14.332 12:05:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:14.332 12:05:21 -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:14.332 12:05:21 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:14.603 12:05:21 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:14.603 12:05:21 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:14.604 12:05:21 -- bdev/bdev_raid.sh@279 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:14.897 [2024-07-25 12:05:21.937073] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:14.897 12:05:21 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:14.897 12:05:21 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:14.897 12:05:21 -- 
bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:14.897 12:05:21 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:14.897 12:05:22 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:14.897 12:05:22 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:14.897 12:05:22 -- bdev/bdev_raid.sh@279 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:15.155 [2024-07-25 12:05:22.281665] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:15.155 12:05:22 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:15.155 12:05:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:15.155 12:05:22 -- bdev/bdev_raid.sh@274 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:15.155 12:05:22 -- bdev/bdev_raid.sh@274 -- # jq -r '.[0]["name"]' 00:13:15.413 12:05:22 -- bdev/bdev_raid.sh@274 -- # raid_bdev=Existed_Raid 00:13:15.413 12:05:22 -- bdev/bdev_raid.sh@275 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:15.413 12:05:22 -- bdev/bdev_raid.sh@279 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:13:15.413 [2024-07-25 12:05:22.629923] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:15.413 [2024-07-25 12:05:22.629943] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:15.413 [2024-07-25 12:05:22.629970] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:15.413 [2024-07-25 12:05:22.641931] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:15.413 [2024-07-25 12:05:22.641951] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x2414130 name Existed_Raid, state offline 00:13:15.413 12:05:22 -- bdev/bdev_raid.sh@273 -- # (( i++ )) 00:13:15.413 12:05:22 -- bdev/bdev_raid.sh@273 -- # (( i < num_base_bdevs )) 00:13:15.413 12:05:22 -- bdev/bdev_raid.sh@281 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:15.413 12:05:22 -- bdev/bdev_raid.sh@281 -- # jq -r '.[0]["name"] | select(.)' 00:13:15.672 12:05:22 -- bdev/bdev_raid.sh@281 -- # raid_bdev= 00:13:15.672 12:05:22 -- bdev/bdev_raid.sh@282 -- # '[' -n '' ']' 00:13:15.672 12:05:22 -- bdev/bdev_raid.sh@287 -- # killprocess 1250632 00:13:15.672 12:05:22 -- common/autotest_common.sh@926 -- # '[' -z 1250632 ']' 00:13:15.672 12:05:22 -- common/autotest_common.sh@930 -- # kill -0 1250632 00:13:15.672 12:05:22 -- common/autotest_common.sh@931 -- # uname 00:13:15.672 12:05:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:15.672 12:05:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1250632 00:13:15.672 12:05:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:15.672 12:05:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:15.672 12:05:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1250632' 00:13:15.672 killing process with pid 1250632 00:13:15.672 12:05:22 -- common/autotest_common.sh@945 -- # kill 1250632 00:13:15.672 [2024-07-25 12:05:22.862053] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:15.672 12:05:22 -- common/autotest_common.sh@950 -- # wait 1250632 00:13:15.672 [2024-07-25 12:05:22.862890] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:15.930 12:05:23 -- bdev/bdev_raid.sh@289 -- # return 0 00:13:15.930 00:13:15.930 real 0m10.381s 00:13:15.930 user 0m18.368s 00:13:15.930 sys 0m2.017s 00:13:15.930 12:05:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:15.930 12:05:23 -- common/autotest_common.sh@10 -- # set +x 
00:13:15.930 ************************************ 00:13:15.930 END TEST raid_state_function_test_sb 00:13:15.930 ************************************ 00:13:15.930 12:05:23 -- bdev/bdev_raid.sh@729 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:13:15.930 12:05:23 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:13:15.930 12:05:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:15.930 12:05:23 -- common/autotest_common.sh@10 -- # set +x 00:13:15.930 ************************************ 00:13:15.930 START TEST raid_superblock_test 00:13:15.930 ************************************ 00:13:15.930 12:05:23 -- common/autotest_common.sh@1104 -- # raid_superblock_test raid1 4 00:13:15.930 12:05:23 -- bdev/bdev_raid.sh@338 -- # local raid_level=raid1 00:13:15.930 12:05:23 -- bdev/bdev_raid.sh@339 -- # local num_base_bdevs=4 00:13:15.930 12:05:23 -- bdev/bdev_raid.sh@340 -- # base_bdevs_malloc=() 00:13:15.930 12:05:23 -- bdev/bdev_raid.sh@340 -- # local base_bdevs_malloc 00:13:15.931 12:05:23 -- bdev/bdev_raid.sh@341 -- # base_bdevs_pt=() 00:13:15.931 12:05:23 -- bdev/bdev_raid.sh@341 -- # local base_bdevs_pt 00:13:15.931 12:05:23 -- bdev/bdev_raid.sh@342 -- # base_bdevs_pt_uuid=() 00:13:15.931 12:05:23 -- bdev/bdev_raid.sh@342 -- # local base_bdevs_pt_uuid 00:13:15.931 12:05:23 -- bdev/bdev_raid.sh@343 -- # local raid_bdev_name=raid_bdev1 00:13:15.931 12:05:23 -- bdev/bdev_raid.sh@344 -- # local strip_size 00:13:15.931 12:05:23 -- bdev/bdev_raid.sh@345 -- # local strip_size_create_arg 00:13:15.931 12:05:23 -- bdev/bdev_raid.sh@346 -- # local raid_bdev_uuid 00:13:15.931 12:05:23 -- bdev/bdev_raid.sh@347 -- # local raid_bdev 00:13:15.931 12:05:23 -- bdev/bdev_raid.sh@349 -- # '[' raid1 '!=' raid1 ']' 00:13:15.931 12:05:23 -- bdev/bdev_raid.sh@353 -- # strip_size=0 00:13:15.931 12:05:23 -- bdev/bdev_raid.sh@357 -- # raid_pid=1252274 00:13:15.931 12:05:23 -- bdev/bdev_raid.sh@358 -- # waitforlisten 1252274 /var/tmp/spdk-raid.sock 
00:13:15.931 12:05:23 -- bdev/bdev_raid.sh@356 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:13:15.931 12:05:23 -- common/autotest_common.sh@819 -- # '[' -z 1252274 ']' 00:13:15.931 12:05:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:15.931 12:05:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:15.931 12:05:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:15.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:15.931 12:05:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:15.931 12:05:23 -- common/autotest_common.sh@10 -- # set +x 00:13:15.931 [2024-07-25 12:05:23.171263] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:13:15.931 [2024-07-25 12:05:23.171337] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1252274 ] 00:13:16.189 [2024-07-25 12:05:23.260270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:16.189 [2024-07-25 12:05:23.346749] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:16.189 [2024-07-25 12:05:23.406466] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:16.189 [2024-07-25 12:05:23.406497] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:16.756 12:05:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:16.756 12:05:23 -- common/autotest_common.sh@852 -- # return 0 00:13:16.756 12:05:23 -- bdev/bdev_raid.sh@361 -- # (( i = 1 )) 00:13:16.756 12:05:23 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:16.756 12:05:23 -- bdev/bdev_raid.sh@362 
-- # local bdev_malloc=malloc1 00:13:16.756 12:05:23 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt1 00:13:16.756 12:05:23 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:16.756 12:05:23 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:16.756 12:05:23 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:13:16.756 12:05:23 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:16.757 12:05:23 -- bdev/bdev_raid.sh@370 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:13:17.014 malloc1 00:13:17.015 12:05:24 -- bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:17.015 [2024-07-25 12:05:24.252835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:17.015 [2024-07-25 12:05:24.252879] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.015 [2024-07-25 12:05:24.252897] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ac28d0 00:13:17.015 [2024-07-25 12:05:24.252911] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.015 [2024-07-25 12:05:24.254256] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.015 [2024-07-25 12:05:24.254290] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:17.015 pt1 00:13:17.015 12:05:24 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:13:17.015 12:05:24 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:17.015 12:05:24 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc2 00:13:17.015 12:05:24 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt2 00:13:17.015 12:05:24 -- bdev/bdev_raid.sh@364 -- # local 
bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:17.015 12:05:24 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:17.015 12:05:24 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:13:17.015 12:05:24 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:17.015 12:05:24 -- bdev/bdev_raid.sh@370 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:13:17.273 malloc2 00:13:17.273 12:05:24 -- bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:17.532 [2024-07-25 12:05:24.589649] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:17.532 [2024-07-25 12:05:24.589688] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.532 [2024-07-25 12:05:24.589701] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c6a1a0 00:13:17.532 [2024-07-25 12:05:24.589710] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.532 [2024-07-25 12:05:24.590903] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.532 [2024-07-25 12:05:24.590926] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:17.532 pt2 00:13:17.532 12:05:24 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:13:17.532 12:05:24 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:17.532 12:05:24 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc3 00:13:17.532 12:05:24 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt3 00:13:17.532 12:05:24 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:17.532 12:05:24 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:17.532 12:05:24 -- bdev/bdev_raid.sh@367 -- # 
base_bdevs_pt+=($bdev_pt) 00:13:17.532 12:05:24 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:17.532 12:05:24 -- bdev/bdev_raid.sh@370 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:13:17.532 malloc3 00:13:17.532 12:05:24 -- bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:17.791 [2024-07-25 12:05:24.902562] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:17.791 [2024-07-25 12:05:24.902599] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:17.791 [2024-07-25 12:05:24.902630] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c6a700 00:13:17.791 [2024-07-25 12:05:24.902638] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:17.791 [2024-07-25 12:05:24.903783] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:17.791 [2024-07-25 12:05:24.903806] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:17.791 pt3 00:13:17.791 12:05:24 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:13:17.791 12:05:24 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:17.791 12:05:24 -- bdev/bdev_raid.sh@362 -- # local bdev_malloc=malloc4 00:13:17.791 12:05:24 -- bdev/bdev_raid.sh@363 -- # local bdev_pt=pt4 00:13:17.791 12:05:24 -- bdev/bdev_raid.sh@364 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:17.791 12:05:24 -- bdev/bdev_raid.sh@366 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:17.791 12:05:24 -- bdev/bdev_raid.sh@367 -- # base_bdevs_pt+=($bdev_pt) 00:13:17.791 12:05:24 -- bdev/bdev_raid.sh@368 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:17.791 12:05:24 -- bdev/bdev_raid.sh@370 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:13:17.791 malloc4 00:13:17.791 12:05:25 -- bdev/bdev_raid.sh@371 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:18.050 [2024-07-25 12:05:25.220314] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:18.050 [2024-07-25 12:05:25.220355] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:18.050 [2024-07-25 12:05:25.220386] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c6ce00 00:13:18.050 [2024-07-25 12:05:25.220395] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:18.050 [2024-07-25 12:05:25.221630] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:18.050 [2024-07-25 12:05:25.221653] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:18.050 pt4 00:13:18.050 12:05:25 -- bdev/bdev_raid.sh@361 -- # (( i++ )) 00:13:18.050 12:05:25 -- bdev/bdev_raid.sh@361 -- # (( i <= num_base_bdevs )) 00:13:18.050 12:05:25 -- bdev/bdev_raid.sh@375 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:13:18.309 [2024-07-25 12:05:25.380748] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:18.309 [2024-07-25 12:05:25.381712] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:18.309 [2024-07-25 12:05:25.381750] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:18.309 [2024-07-25 12:05:25.381775] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:18.309 [2024-07-25 12:05:25.381905] 
bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c689f0 00:13:18.309 [2024-07-25 12:05:25.381913] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:18.309 [2024-07-25 12:05:25.382048] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c70580 00:13:18.309 [2024-07-25 12:05:25.382146] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c689f0 00:13:18.309 [2024-07-25 12:05:25.382152] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c689f0 00:13:18.309 [2024-07-25 12:05:25.382218] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:18.309 12:05:25 -- bdev/bdev_raid.sh@376 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:18.309 12:05:25 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:18.309 12:05:25 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:18.309 12:05:25 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:18.309 12:05:25 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:18.309 12:05:25 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:18.309 12:05:25 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:18.309 12:05:25 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:18.309 12:05:25 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:18.309 12:05:25 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:18.309 12:05:25 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:18.309 12:05:25 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:18.309 12:05:25 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:18.309 "name": "raid_bdev1", 00:13:18.309 "uuid": "4d800771-ed4f-4308-a159-cceca995056f", 00:13:18.309 "strip_size_kb": 0, 00:13:18.309 "state": "online", 
00:13:18.309 "raid_level": "raid1", 00:13:18.309 "superblock": true, 00:13:18.309 "num_base_bdevs": 4, 00:13:18.309 "num_base_bdevs_discovered": 4, 00:13:18.309 "num_base_bdevs_operational": 4, 00:13:18.309 "base_bdevs_list": [ 00:13:18.309 { 00:13:18.309 "name": "pt1", 00:13:18.309 "uuid": "d0995c0b-5203-558c-8f5a-ba27d2cd0a5a", 00:13:18.309 "is_configured": true, 00:13:18.309 "data_offset": 2048, 00:13:18.309 "data_size": 63488 00:13:18.309 }, 00:13:18.309 { 00:13:18.309 "name": "pt2", 00:13:18.309 "uuid": "4edef9fa-7976-5f07-8ac9-77956dc51695", 00:13:18.309 "is_configured": true, 00:13:18.309 "data_offset": 2048, 00:13:18.309 "data_size": 63488 00:13:18.309 }, 00:13:18.309 { 00:13:18.309 "name": "pt3", 00:13:18.309 "uuid": "c467f976-27c4-5f1f-b5ac-18c5d1bee2d6", 00:13:18.309 "is_configured": true, 00:13:18.309 "data_offset": 2048, 00:13:18.309 "data_size": 63488 00:13:18.309 }, 00:13:18.309 { 00:13:18.309 "name": "pt4", 00:13:18.309 "uuid": "517250bb-cf31-5e6a-80ac-b7cc622e0cdc", 00:13:18.309 "is_configured": true, 00:13:18.309 "data_offset": 2048, 00:13:18.309 "data_size": 63488 00:13:18.309 } 00:13:18.309 ] 00:13:18.309 }' 00:13:18.309 12:05:25 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:18.309 12:05:25 -- common/autotest_common.sh@10 -- # set +x 00:13:18.877 12:05:26 -- bdev/bdev_raid.sh@379 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:18.877 12:05:26 -- bdev/bdev_raid.sh@379 -- # jq -r '.[] | .uuid' 00:13:18.877 [2024-07-25 12:05:26.146845] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:18.877 12:05:26 -- bdev/bdev_raid.sh@379 -- # raid_bdev_uuid=4d800771-ed4f-4308-a159-cceca995056f 00:13:18.877 12:05:26 -- bdev/bdev_raid.sh@380 -- # '[' -z 4d800771-ed4f-4308-a159-cceca995056f ']' 00:13:18.877 12:05:26 -- bdev/bdev_raid.sh@385 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_delete raid_bdev1 00:13:19.136 [2024-07-25 12:05:26.315111] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:19.136 [2024-07-25 12:05:26.315130] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:19.136 [2024-07-25 12:05:26.315166] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:19.136 [2024-07-25 12:05:26.315222] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:19.136 [2024-07-25 12:05:26.315229] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c689f0 name raid_bdev1, state offline 00:13:19.136 12:05:26 -- bdev/bdev_raid.sh@386 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:19.136 12:05:26 -- bdev/bdev_raid.sh@386 -- # jq -r '.[]' 00:13:19.394 12:05:26 -- bdev/bdev_raid.sh@386 -- # raid_bdev= 00:13:19.394 12:05:26 -- bdev/bdev_raid.sh@387 -- # '[' -n '' ']' 00:13:19.394 12:05:26 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:13:19.394 12:05:26 -- bdev/bdev_raid.sh@393 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:19.394 12:05:26 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:13:19.394 12:05:26 -- bdev/bdev_raid.sh@393 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:19.653 12:05:26 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:13:19.653 12:05:26 -- bdev/bdev_raid.sh@393 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:13:19.653 12:05:26 -- bdev/bdev_raid.sh@392 -- # for i in "${base_bdevs_pt[@]}" 00:13:19.653 12:05:26 -- bdev/bdev_raid.sh@393 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:13:19.911 12:05:27 -- bdev/bdev_raid.sh@395 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:13:19.911 12:05:27 -- bdev/bdev_raid.sh@395 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:20.169 12:05:27 -- bdev/bdev_raid.sh@395 -- # '[' false == true ']' 00:13:20.169 12:05:27 -- bdev/bdev_raid.sh@401 -- # NOT /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:13:20.169 12:05:27 -- common/autotest_common.sh@640 -- # local es=0 00:13:20.169 12:05:27 -- common/autotest_common.sh@642 -- # valid_exec_arg /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:13:20.169 12:05:27 -- common/autotest_common.sh@628 -- # local arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:20.169 12:05:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:20.169 12:05:27 -- common/autotest_common.sh@632 -- # type -t /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:20.169 12:05:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:20.169 12:05:27 -- common/autotest_common.sh@634 -- # type -P /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:20.169 12:05:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:13:20.169 12:05:27 -- common/autotest_common.sh@634 -- # arg=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:13:20.169 12:05:27 -- common/autotest_common.sh@634 -- # [[ -x /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py ]] 00:13:20.169 12:05:27 -- common/autotest_common.sh@643 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:13:20.169 [2024-07-25 12:05:27.417946] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:20.169 [2024-07-25 12:05:27.418953] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:20.169 [2024-07-25 12:05:27.418984] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:20.169 [2024-07-25 12:05:27.419005] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:20.169 [2024-07-25 12:05:27.419036] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc1 00:13:20.169 [2024-07-25 12:05:27.419064] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc2 00:13:20.169 [2024-07-25 12:05:27.419095] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc3 00:13:20.169 [2024-07-25 12:05:27.419109] bdev_raid.c:2847:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Existing raid superblock found on bdev malloc4 00:13:20.169 [2024-07-25 12:05:27.419121] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:20.169 [2024-07-25 12:05:27.419128] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ac31c0 name raid_bdev1, state configuring 00:13:20.169 request: 00:13:20.169 { 00:13:20.169 "name": "raid_bdev1", 00:13:20.169 "raid_level": "raid1", 00:13:20.169 "base_bdevs": [ 00:13:20.169 "malloc1", 00:13:20.169 "malloc2", 00:13:20.169 "malloc3", 00:13:20.169 "malloc4" 00:13:20.169 ], 00:13:20.169 "superblock": false, 00:13:20.169 "method": "bdev_raid_create", 00:13:20.169 "req_id": 1 00:13:20.169 } 00:13:20.169 Got JSON-RPC error response 00:13:20.169 response: 00:13:20.169 
{ 00:13:20.169 "code": -17, 00:13:20.169 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:20.169 } 00:13:20.169 12:05:27 -- common/autotest_common.sh@643 -- # es=1 00:13:20.169 12:05:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:13:20.169 12:05:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:13:20.169 12:05:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:13:20.169 12:05:27 -- bdev/bdev_raid.sh@403 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:20.169 12:05:27 -- bdev/bdev_raid.sh@403 -- # jq -r '.[]' 00:13:20.428 12:05:27 -- bdev/bdev_raid.sh@403 -- # raid_bdev= 00:13:20.428 12:05:27 -- bdev/bdev_raid.sh@404 -- # '[' -n '' ']' 00:13:20.428 12:05:27 -- bdev/bdev_raid.sh@409 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:20.428 [2024-07-25 12:05:27.730715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:20.428 [2024-07-25 12:05:27.730751] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:20.428 [2024-07-25 12:05:27.730767] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c6b7c0 00:13:20.428 [2024-07-25 12:05:27.730775] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:20.428 [2024-07-25 12:05:27.732032] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:20.428 [2024-07-25 12:05:27.732055] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:20.428 [2024-07-25 12:05:27.732110] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:13:20.428 [2024-07-25 12:05:27.732128] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:20.428 pt1 00:13:20.687 12:05:27 -- 
bdev/bdev_raid.sh@412 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:13:20.687 12:05:27 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:20.687 12:05:27 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:20.687 12:05:27 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:20.687 12:05:27 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:20.687 12:05:27 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:20.687 12:05:27 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:20.687 12:05:27 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:20.687 12:05:27 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:20.687 12:05:27 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:20.687 12:05:27 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:20.687 12:05:27 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:20.687 12:05:27 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:20.687 "name": "raid_bdev1", 00:13:20.687 "uuid": "4d800771-ed4f-4308-a159-cceca995056f", 00:13:20.687 "strip_size_kb": 0, 00:13:20.687 "state": "configuring", 00:13:20.687 "raid_level": "raid1", 00:13:20.687 "superblock": true, 00:13:20.687 "num_base_bdevs": 4, 00:13:20.687 "num_base_bdevs_discovered": 1, 00:13:20.687 "num_base_bdevs_operational": 4, 00:13:20.687 "base_bdevs_list": [ 00:13:20.687 { 00:13:20.687 "name": "pt1", 00:13:20.687 "uuid": "d0995c0b-5203-558c-8f5a-ba27d2cd0a5a", 00:13:20.687 "is_configured": true, 00:13:20.687 "data_offset": 2048, 00:13:20.687 "data_size": 63488 00:13:20.687 }, 00:13:20.687 { 00:13:20.687 "name": null, 00:13:20.687 "uuid": "4edef9fa-7976-5f07-8ac9-77956dc51695", 00:13:20.687 "is_configured": false, 00:13:20.687 "data_offset": 2048, 00:13:20.687 "data_size": 63488 00:13:20.687 }, 00:13:20.687 { 00:13:20.687 "name": null, 
00:13:20.687 "uuid": "c467f976-27c4-5f1f-b5ac-18c5d1bee2d6", 00:13:20.687 "is_configured": false, 00:13:20.687 "data_offset": 2048, 00:13:20.687 "data_size": 63488 00:13:20.687 }, 00:13:20.687 { 00:13:20.687 "name": null, 00:13:20.687 "uuid": "517250bb-cf31-5e6a-80ac-b7cc622e0cdc", 00:13:20.687 "is_configured": false, 00:13:20.687 "data_offset": 2048, 00:13:20.687 "data_size": 63488 00:13:20.687 } 00:13:20.687 ] 00:13:20.687 }' 00:13:20.687 12:05:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:20.687 12:05:27 -- common/autotest_common.sh@10 -- # set +x 00:13:21.254 12:05:28 -- bdev/bdev_raid.sh@414 -- # '[' 4 -gt 2 ']' 00:13:21.254 12:05:28 -- bdev/bdev_raid.sh@416 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:21.254 [2024-07-25 12:05:28.540803] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:21.254 [2024-07-25 12:05:28.540842] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:21.254 [2024-07-25 12:05:28.540871] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c6ebe0 00:13:21.254 [2024-07-25 12:05:28.540880] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:21.254 [2024-07-25 12:05:28.541121] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:21.254 [2024-07-25 12:05:28.541132] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:21.254 [2024-07-25 12:05:28.541178] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:13:21.254 [2024-07-25 12:05:28.541190] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:21.254 pt2 00:13:21.254 12:05:28 -- bdev/bdev_raid.sh@417 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_passthru_delete pt2 00:13:21.512 [2024-07-25 12:05:28.721281] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:21.513 12:05:28 -- bdev/bdev_raid.sh@418 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:13:21.513 12:05:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:21.513 12:05:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:21.513 12:05:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:21.513 12:05:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:21.513 12:05:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:21.513 12:05:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:21.513 12:05:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:21.513 12:05:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:21.513 12:05:28 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:21.513 12:05:28 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:21.513 12:05:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:21.771 12:05:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:21.771 "name": "raid_bdev1", 00:13:21.771 "uuid": "4d800771-ed4f-4308-a159-cceca995056f", 00:13:21.771 "strip_size_kb": 0, 00:13:21.771 "state": "configuring", 00:13:21.771 "raid_level": "raid1", 00:13:21.771 "superblock": true, 00:13:21.771 "num_base_bdevs": 4, 00:13:21.771 "num_base_bdevs_discovered": 1, 00:13:21.771 "num_base_bdevs_operational": 4, 00:13:21.771 "base_bdevs_list": [ 00:13:21.771 { 00:13:21.771 "name": "pt1", 00:13:21.771 "uuid": "d0995c0b-5203-558c-8f5a-ba27d2cd0a5a", 00:13:21.771 "is_configured": true, 00:13:21.771 "data_offset": 2048, 00:13:21.771 "data_size": 63488 00:13:21.771 }, 00:13:21.771 { 00:13:21.771 "name": null, 00:13:21.771 "uuid": "4edef9fa-7976-5f07-8ac9-77956dc51695", 00:13:21.771 
"is_configured": false, 00:13:21.771 "data_offset": 2048, 00:13:21.771 "data_size": 63488 00:13:21.771 }, 00:13:21.771 { 00:13:21.771 "name": null, 00:13:21.771 "uuid": "c467f976-27c4-5f1f-b5ac-18c5d1bee2d6", 00:13:21.771 "is_configured": false, 00:13:21.771 "data_offset": 2048, 00:13:21.771 "data_size": 63488 00:13:21.771 }, 00:13:21.771 { 00:13:21.771 "name": null, 00:13:21.771 "uuid": "517250bb-cf31-5e6a-80ac-b7cc622e0cdc", 00:13:21.771 "is_configured": false, 00:13:21.771 "data_offset": 2048, 00:13:21.771 "data_size": 63488 00:13:21.771 } 00:13:21.771 ] 00:13:21.771 }' 00:13:21.771 12:05:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:21.771 12:05:28 -- common/autotest_common.sh@10 -- # set +x 00:13:22.338 12:05:29 -- bdev/bdev_raid.sh@422 -- # (( i = 1 )) 00:13:22.338 12:05:29 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:13:22.338 12:05:29 -- bdev/bdev_raid.sh@423 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:22.338 [2024-07-25 12:05:29.547502] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:22.338 [2024-07-25 12:05:29.547542] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.338 [2024-07-25 12:05:29.547572] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c6ee50 00:13:22.338 [2024-07-25 12:05:29.547581] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.338 [2024-07-25 12:05:29.547819] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.338 [2024-07-25 12:05:29.547830] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:22.338 [2024-07-25 12:05:29.547871] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:13:22.338 [2024-07-25 12:05:29.547884] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:22.338 pt2 00:13:22.338 12:05:29 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:13:22.338 12:05:29 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:13:22.338 12:05:29 -- bdev/bdev_raid.sh@423 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:22.597 [2024-07-25 12:05:29.723957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:22.597 [2024-07-25 12:05:29.723984] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.597 [2024-07-25 12:05:29.723997] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c6e1b0 00:13:22.597 [2024-07-25 12:05:29.724005] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.597 [2024-07-25 12:05:29.724216] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.597 [2024-07-25 12:05:29.724240] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:22.597 [2024-07-25 12:05:29.724286] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:13:22.597 [2024-07-25 12:05:29.724299] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:22.597 pt3 00:13:22.597 12:05:29 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:13:22.597 12:05:29 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:13:22.597 12:05:29 -- bdev/bdev_raid.sh@423 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:22.597 [2024-07-25 12:05:29.904421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:22.597 [2024-07-25 12:05:29.904444] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.597 [2024-07-25 12:05:29.904460] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c69340 00:13:22.597 [2024-07-25 12:05:29.904468] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.597 [2024-07-25 12:05:29.904665] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.597 [2024-07-25 12:05:29.904677] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:22.597 [2024-07-25 12:05:29.904711] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:13:22.597 [2024-07-25 12:05:29.904721] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:22.597 [2024-07-25 12:05:29.904799] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c6f320 00:13:22.597 [2024-07-25 12:05:29.904806] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:22.597 [2024-07-25 12:05:29.904915] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c60bb0 00:13:22.597 [2024-07-25 12:05:29.905005] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c6f320 00:13:22.597 [2024-07-25 12:05:29.905011] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c6f320 00:13:22.597 [2024-07-25 12:05:29.905075] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.856 pt4 00:13:22.856 12:05:29 -- bdev/bdev_raid.sh@422 -- # (( i++ )) 00:13:22.856 12:05:29 -- bdev/bdev_raid.sh@422 -- # (( i < num_base_bdevs )) 00:13:22.856 12:05:29 -- bdev/bdev_raid.sh@427 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:22.856 12:05:29 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:22.856 12:05:29 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:22.856 12:05:29 -- 
bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:22.856 12:05:29 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:22.856 12:05:29 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:22.856 12:05:29 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:22.856 12:05:29 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:22.856 12:05:29 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:22.856 12:05:29 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:22.856 12:05:29 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:22.856 12:05:29 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.856 12:05:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:22.856 "name": "raid_bdev1", 00:13:22.856 "uuid": "4d800771-ed4f-4308-a159-cceca995056f", 00:13:22.856 "strip_size_kb": 0, 00:13:22.856 "state": "online", 00:13:22.856 "raid_level": "raid1", 00:13:22.856 "superblock": true, 00:13:22.856 "num_base_bdevs": 4, 00:13:22.856 "num_base_bdevs_discovered": 4, 00:13:22.856 "num_base_bdevs_operational": 4, 00:13:22.856 "base_bdevs_list": [ 00:13:22.856 { 00:13:22.856 "name": "pt1", 00:13:22.856 "uuid": "d0995c0b-5203-558c-8f5a-ba27d2cd0a5a", 00:13:22.856 "is_configured": true, 00:13:22.856 "data_offset": 2048, 00:13:22.856 "data_size": 63488 00:13:22.856 }, 00:13:22.856 { 00:13:22.856 "name": "pt2", 00:13:22.856 "uuid": "4edef9fa-7976-5f07-8ac9-77956dc51695", 00:13:22.856 "is_configured": true, 00:13:22.856 "data_offset": 2048, 00:13:22.856 "data_size": 63488 00:13:22.856 }, 00:13:22.856 { 00:13:22.856 "name": "pt3", 00:13:22.856 "uuid": "c467f976-27c4-5f1f-b5ac-18c5d1bee2d6", 00:13:22.856 "is_configured": true, 00:13:22.856 "data_offset": 2048, 00:13:22.856 "data_size": 63488 00:13:22.856 }, 00:13:22.856 { 00:13:22.856 "name": "pt4", 00:13:22.856 "uuid": "517250bb-cf31-5e6a-80ac-b7cc622e0cdc", 
00:13:22.856 "is_configured": true, 00:13:22.856 "data_offset": 2048, 00:13:22.856 "data_size": 63488 00:13:22.856 } 00:13:22.856 ] 00:13:22.856 }' 00:13:22.856 12:05:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:22.856 12:05:30 -- common/autotest_common.sh@10 -- # set +x 00:13:23.423 12:05:30 -- bdev/bdev_raid.sh@430 -- # jq -r '.[] | .uuid' 00:13:23.423 12:05:30 -- bdev/bdev_raid.sh@430 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:23.682 [2024-07-25 12:05:30.758808] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:23.682 12:05:30 -- bdev/bdev_raid.sh@430 -- # '[' 4d800771-ed4f-4308-a159-cceca995056f '!=' 4d800771-ed4f-4308-a159-cceca995056f ']' 00:13:23.682 12:05:30 -- bdev/bdev_raid.sh@434 -- # has_redundancy raid1 00:13:23.682 12:05:30 -- bdev/bdev_raid.sh@195 -- # case $1 in 00:13:23.682 12:05:30 -- bdev/bdev_raid.sh@196 -- # return 0 00:13:23.682 12:05:30 -- bdev/bdev_raid.sh@436 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:23.682 [2024-07-25 12:05:30.923061] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:23.682 12:05:30 -- bdev/bdev_raid.sh@439 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:23.682 12:05:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:23.682 12:05:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:23.682 12:05:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:23.682 12:05:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:23.682 12:05:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:23.682 12:05:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:23.682 12:05:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:23.682 12:05:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:23.682 
12:05:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:23.682 12:05:30 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:23.682 12:05:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:23.940 12:05:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:23.940 "name": "raid_bdev1", 00:13:23.941 "uuid": "4d800771-ed4f-4308-a159-cceca995056f", 00:13:23.941 "strip_size_kb": 0, 00:13:23.941 "state": "online", 00:13:23.941 "raid_level": "raid1", 00:13:23.941 "superblock": true, 00:13:23.941 "num_base_bdevs": 4, 00:13:23.941 "num_base_bdevs_discovered": 3, 00:13:23.941 "num_base_bdevs_operational": 3, 00:13:23.941 "base_bdevs_list": [ 00:13:23.941 { 00:13:23.941 "name": null, 00:13:23.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:23.941 "is_configured": false, 00:13:23.941 "data_offset": 2048, 00:13:23.941 "data_size": 63488 00:13:23.941 }, 00:13:23.941 { 00:13:23.941 "name": "pt2", 00:13:23.941 "uuid": "4edef9fa-7976-5f07-8ac9-77956dc51695", 00:13:23.941 "is_configured": true, 00:13:23.941 "data_offset": 2048, 00:13:23.941 "data_size": 63488 00:13:23.941 }, 00:13:23.941 { 00:13:23.941 "name": "pt3", 00:13:23.941 "uuid": "c467f976-27c4-5f1f-b5ac-18c5d1bee2d6", 00:13:23.941 "is_configured": true, 00:13:23.941 "data_offset": 2048, 00:13:23.941 "data_size": 63488 00:13:23.941 }, 00:13:23.941 { 00:13:23.941 "name": "pt4", 00:13:23.941 "uuid": "517250bb-cf31-5e6a-80ac-b7cc622e0cdc", 00:13:23.941 "is_configured": true, 00:13:23.941 "data_offset": 2048, 00:13:23.941 "data_size": 63488 00:13:23.941 } 00:13:23.941 ] 00:13:23.941 }' 00:13:23.941 12:05:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:23.941 12:05:31 -- common/autotest_common.sh@10 -- # set +x 00:13:24.508 12:05:31 -- bdev/bdev_raid.sh@442 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 
00:13:24.508 [2024-07-25 12:05:31.729117] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:24.508 [2024-07-25 12:05:31.729139] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:24.508 [2024-07-25 12:05:31.729179] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:24.508 [2024-07-25 12:05:31.729230] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:24.508 [2024-07-25 12:05:31.729237] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c6f320 name raid_bdev1, state offline 00:13:24.508 12:05:31 -- bdev/bdev_raid.sh@443 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:24.508 12:05:31 -- bdev/bdev_raid.sh@443 -- # jq -r '.[]' 00:13:24.767 12:05:31 -- bdev/bdev_raid.sh@443 -- # raid_bdev= 00:13:24.767 12:05:31 -- bdev/bdev_raid.sh@444 -- # '[' -n '' ']' 00:13:24.767 12:05:31 -- bdev/bdev_raid.sh@449 -- # (( i = 1 )) 00:13:24.767 12:05:31 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:13:24.767 12:05:31 -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:25.025 12:05:32 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:13:25.025 12:05:32 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:13:25.025 12:05:32 -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:13:25.025 12:05:32 -- bdev/bdev_raid.sh@449 -- # (( i++ )) 00:13:25.025 12:05:32 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:13:25.025 12:05:32 -- bdev/bdev_raid.sh@450 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:13:25.284 12:05:32 -- bdev/bdev_raid.sh@449 -- # 
(( i++ )) 00:13:25.284 12:05:32 -- bdev/bdev_raid.sh@449 -- # (( i < num_base_bdevs )) 00:13:25.284 12:05:32 -- bdev/bdev_raid.sh@454 -- # (( i = 1 )) 00:13:25.284 12:05:32 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:13:25.284 12:05:32 -- bdev/bdev_raid.sh@455 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:25.543 [2024-07-25 12:05:32.595318] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:25.543 [2024-07-25 12:05:32.595355] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:25.543 [2024-07-25 12:05:32.595370] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c699a0 00:13:25.543 [2024-07-25 12:05:32.595379] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:25.543 [2024-07-25 12:05:32.596572] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:25.543 [2024-07-25 12:05:32.596594] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:25.543 [2024-07-25 12:05:32.596640] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:13:25.543 [2024-07-25 12:05:32.596659] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:25.543 pt2 00:13:25.543 12:05:32 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:25.543 12:05:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:25.543 12:05:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:25.543 12:05:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:25.543 12:05:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:25.543 12:05:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:25.543 12:05:32 -- bdev/bdev_raid.sh@122 -- 
# local raid_bdev_info 00:13:25.543 12:05:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:25.543 12:05:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:25.543 12:05:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:25.543 12:05:32 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:25.543 12:05:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:25.543 12:05:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:25.543 "name": "raid_bdev1", 00:13:25.543 "uuid": "4d800771-ed4f-4308-a159-cceca995056f", 00:13:25.543 "strip_size_kb": 0, 00:13:25.543 "state": "configuring", 00:13:25.543 "raid_level": "raid1", 00:13:25.543 "superblock": true, 00:13:25.543 "num_base_bdevs": 4, 00:13:25.543 "num_base_bdevs_discovered": 1, 00:13:25.543 "num_base_bdevs_operational": 3, 00:13:25.543 "base_bdevs_list": [ 00:13:25.543 { 00:13:25.543 "name": null, 00:13:25.543 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.543 "is_configured": false, 00:13:25.543 "data_offset": 2048, 00:13:25.543 "data_size": 63488 00:13:25.543 }, 00:13:25.543 { 00:13:25.543 "name": "pt2", 00:13:25.543 "uuid": "4edef9fa-7976-5f07-8ac9-77956dc51695", 00:13:25.543 "is_configured": true, 00:13:25.543 "data_offset": 2048, 00:13:25.543 "data_size": 63488 00:13:25.543 }, 00:13:25.543 { 00:13:25.543 "name": null, 00:13:25.543 "uuid": "c467f976-27c4-5f1f-b5ac-18c5d1bee2d6", 00:13:25.543 "is_configured": false, 00:13:25.543 "data_offset": 2048, 00:13:25.543 "data_size": 63488 00:13:25.543 }, 00:13:25.543 { 00:13:25.543 "name": null, 00:13:25.543 "uuid": "517250bb-cf31-5e6a-80ac-b7cc622e0cdc", 00:13:25.543 "is_configured": false, 00:13:25.543 "data_offset": 2048, 00:13:25.543 "data_size": 63488 00:13:25.543 } 00:13:25.543 ] 00:13:25.543 }' 00:13:25.543 12:05:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:25.543 12:05:32 -- 
common/autotest_common.sh@10 -- # set +x 00:13:26.111 12:05:33 -- bdev/bdev_raid.sh@454 -- # (( i++ )) 00:13:26.111 12:05:33 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:13:26.111 12:05:33 -- bdev/bdev_raid.sh@455 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:26.111 [2024-07-25 12:05:33.393374] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:26.111 [2024-07-25 12:05:33.393419] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.111 [2024-07-25 12:05:33.393434] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c6e4f0 00:13:26.111 [2024-07-25 12:05:33.393447] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.111 [2024-07-25 12:05:33.393697] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.111 [2024-07-25 12:05:33.393709] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:26.111 [2024-07-25 12:05:33.393753] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:13:26.111 [2024-07-25 12:05:33.393766] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:26.111 pt3 00:13:26.111 12:05:33 -- bdev/bdev_raid.sh@458 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:26.111 12:05:33 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:26.111 12:05:33 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:26.111 12:05:33 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:26.111 12:05:33 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:26.111 12:05:33 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:26.111 12:05:33 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:26.111 12:05:33 -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:26.111 12:05:33 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:26.111 12:05:33 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:26.111 12:05:33 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:26.111 12:05:33 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:26.369 12:05:33 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:26.369 "name": "raid_bdev1", 00:13:26.369 "uuid": "4d800771-ed4f-4308-a159-cceca995056f", 00:13:26.369 "strip_size_kb": 0, 00:13:26.369 "state": "configuring", 00:13:26.369 "raid_level": "raid1", 00:13:26.369 "superblock": true, 00:13:26.369 "num_base_bdevs": 4, 00:13:26.369 "num_base_bdevs_discovered": 2, 00:13:26.369 "num_base_bdevs_operational": 3, 00:13:26.369 "base_bdevs_list": [ 00:13:26.369 { 00:13:26.369 "name": null, 00:13:26.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.369 "is_configured": false, 00:13:26.369 "data_offset": 2048, 00:13:26.369 "data_size": 63488 00:13:26.369 }, 00:13:26.369 { 00:13:26.369 "name": "pt2", 00:13:26.369 "uuid": "4edef9fa-7976-5f07-8ac9-77956dc51695", 00:13:26.369 "is_configured": true, 00:13:26.369 "data_offset": 2048, 00:13:26.369 "data_size": 63488 00:13:26.369 }, 00:13:26.369 { 00:13:26.369 "name": "pt3", 00:13:26.369 "uuid": "c467f976-27c4-5f1f-b5ac-18c5d1bee2d6", 00:13:26.369 "is_configured": true, 00:13:26.369 "data_offset": 2048, 00:13:26.369 "data_size": 63488 00:13:26.369 }, 00:13:26.369 { 00:13:26.369 "name": null, 00:13:26.369 "uuid": "517250bb-cf31-5e6a-80ac-b7cc622e0cdc", 00:13:26.369 "is_configured": false, 00:13:26.369 "data_offset": 2048, 00:13:26.369 "data_size": 63488 00:13:26.369 } 00:13:26.369 ] 00:13:26.369 }' 00:13:26.369 12:05:33 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:26.369 12:05:33 -- common/autotest_common.sh@10 -- # set +x 00:13:26.934 12:05:34 -- 
bdev/bdev_raid.sh@454 -- # (( i++ )) 00:13:26.934 12:05:34 -- bdev/bdev_raid.sh@454 -- # (( i < num_base_bdevs - 1 )) 00:13:26.934 12:05:34 -- bdev/bdev_raid.sh@462 -- # i=3 00:13:26.934 12:05:34 -- bdev/bdev_raid.sh@463 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:26.934 [2024-07-25 12:05:34.199466] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:26.934 [2024-07-25 12:05:34.199508] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:26.934 [2024-07-25 12:05:34.199524] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c70070 00:13:26.934 [2024-07-25 12:05:34.199533] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:26.934 [2024-07-25 12:05:34.199786] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:26.934 [2024-07-25 12:05:34.199797] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:26.934 [2024-07-25 12:05:34.199842] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 00:13:26.934 [2024-07-25 12:05:34.199854] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:26.934 [2024-07-25 12:05:34.199928] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c60640 00:13:26.934 [2024-07-25 12:05:34.199935] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:26.934 [2024-07-25 12:05:34.200057] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ac3e00 00:13:26.934 [2024-07-25 12:05:34.200146] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c60640 00:13:26.934 [2024-07-25 12:05:34.200152] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x1c60640 00:13:26.934 [2024-07-25 12:05:34.200218] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:26.934 pt4 00:13:26.934 12:05:34 -- bdev/bdev_raid.sh@466 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:26.934 12:05:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:26.934 12:05:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:26.934 12:05:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:26.934 12:05:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:26.934 12:05:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:26.934 12:05:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:26.934 12:05:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:26.934 12:05:34 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:26.934 12:05:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:26.934 12:05:34 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:26.934 12:05:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.193 12:05:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:27.193 "name": "raid_bdev1", 00:13:27.193 "uuid": "4d800771-ed4f-4308-a159-cceca995056f", 00:13:27.193 "strip_size_kb": 0, 00:13:27.193 "state": "online", 00:13:27.193 "raid_level": "raid1", 00:13:27.193 "superblock": true, 00:13:27.193 "num_base_bdevs": 4, 00:13:27.193 "num_base_bdevs_discovered": 3, 00:13:27.193 "num_base_bdevs_operational": 3, 00:13:27.193 "base_bdevs_list": [ 00:13:27.193 { 00:13:27.193 "name": null, 00:13:27.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.193 "is_configured": false, 00:13:27.193 "data_offset": 2048, 00:13:27.193 "data_size": 63488 00:13:27.193 }, 00:13:27.193 { 00:13:27.193 "name": "pt2", 00:13:27.193 "uuid": "4edef9fa-7976-5f07-8ac9-77956dc51695", 00:13:27.193 
"is_configured": true, 00:13:27.193 "data_offset": 2048, 00:13:27.193 "data_size": 63488 00:13:27.193 }, 00:13:27.193 { 00:13:27.193 "name": "pt3", 00:13:27.193 "uuid": "c467f976-27c4-5f1f-b5ac-18c5d1bee2d6", 00:13:27.193 "is_configured": true, 00:13:27.193 "data_offset": 2048, 00:13:27.193 "data_size": 63488 00:13:27.193 }, 00:13:27.193 { 00:13:27.193 "name": "pt4", 00:13:27.193 "uuid": "517250bb-cf31-5e6a-80ac-b7cc622e0cdc", 00:13:27.193 "is_configured": true, 00:13:27.193 "data_offset": 2048, 00:13:27.193 "data_size": 63488 00:13:27.193 } 00:13:27.193 ] 00:13:27.193 }' 00:13:27.193 12:05:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:27.193 12:05:34 -- common/autotest_common.sh@10 -- # set +x 00:13:27.759 12:05:34 -- bdev/bdev_raid.sh@468 -- # '[' 4 -gt 2 ']' 00:13:27.759 12:05:34 -- bdev/bdev_raid.sh@470 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:27.759 [2024-07-25 12:05:35.013548] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:27.759 [2024-07-25 12:05:35.013571] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:27.759 [2024-07-25 12:05:35.013627] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:27.759 [2024-07-25 12:05:35.013675] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:27.759 [2024-07-25 12:05:35.013683] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c60640 name raid_bdev1, state offline 00:13:27.759 12:05:35 -- bdev/bdev_raid.sh@471 -- # jq -r '.[]' 00:13:27.759 12:05:35 -- bdev/bdev_raid.sh@471 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:28.017 12:05:35 -- bdev/bdev_raid.sh@471 -- # raid_bdev= 00:13:28.017 12:05:35 -- bdev/bdev_raid.sh@472 -- # '[' -n '' ']' 00:13:28.017 12:05:35 
-- bdev/bdev_raid.sh@478 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:28.276 [2024-07-25 12:05:35.370457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:28.276 [2024-07-25 12:05:35.370495] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:28.276 [2024-07-25 12:05:35.370525] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c6f910 00:13:28.276 [2024-07-25 12:05:35.370539] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:28.276 [2024-07-25 12:05:35.371705] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:28.276 [2024-07-25 12:05:35.371726] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:28.276 [2024-07-25 12:05:35.371771] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt1 00:13:28.276 [2024-07-25 12:05:35.371788] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:28.276 pt1 00:13:28.276 12:05:35 -- bdev/bdev_raid.sh@481 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:13:28.276 12:05:35 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:28.276 12:05:35 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:28.276 12:05:35 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:28.276 12:05:35 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:28.276 12:05:35 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:13:28.276 12:05:35 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:28.276 12:05:35 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:28.276 12:05:35 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:28.276 12:05:35 -- bdev/bdev_raid.sh@125 -- # local tmp 
00:13:28.276 12:05:35 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:28.276 12:05:35 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.276 12:05:35 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:28.276 "name": "raid_bdev1", 00:13:28.276 "uuid": "4d800771-ed4f-4308-a159-cceca995056f", 00:13:28.276 "strip_size_kb": 0, 00:13:28.276 "state": "configuring", 00:13:28.276 "raid_level": "raid1", 00:13:28.276 "superblock": true, 00:13:28.276 "num_base_bdevs": 4, 00:13:28.276 "num_base_bdevs_discovered": 1, 00:13:28.276 "num_base_bdevs_operational": 4, 00:13:28.276 "base_bdevs_list": [ 00:13:28.276 { 00:13:28.276 "name": "pt1", 00:13:28.276 "uuid": "d0995c0b-5203-558c-8f5a-ba27d2cd0a5a", 00:13:28.276 "is_configured": true, 00:13:28.276 "data_offset": 2048, 00:13:28.276 "data_size": 63488 00:13:28.276 }, 00:13:28.276 { 00:13:28.276 "name": null, 00:13:28.276 "uuid": "4edef9fa-7976-5f07-8ac9-77956dc51695", 00:13:28.276 "is_configured": false, 00:13:28.276 "data_offset": 2048, 00:13:28.276 "data_size": 63488 00:13:28.276 }, 00:13:28.276 { 00:13:28.276 "name": null, 00:13:28.276 "uuid": "c467f976-27c4-5f1f-b5ac-18c5d1bee2d6", 00:13:28.276 "is_configured": false, 00:13:28.276 "data_offset": 2048, 00:13:28.276 "data_size": 63488 00:13:28.276 }, 00:13:28.276 { 00:13:28.276 "name": null, 00:13:28.276 "uuid": "517250bb-cf31-5e6a-80ac-b7cc622e0cdc", 00:13:28.276 "is_configured": false, 00:13:28.276 "data_offset": 2048, 00:13:28.276 "data_size": 63488 00:13:28.276 } 00:13:28.276 ] 00:13:28.276 }' 00:13:28.276 12:05:35 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:28.276 12:05:35 -- common/autotest_common.sh@10 -- # set +x 00:13:28.858 12:05:36 -- bdev/bdev_raid.sh@484 -- # (( i = 1 )) 00:13:28.858 12:05:36 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:13:28.858 12:05:36 -- bdev/bdev_raid.sh@485 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:29.137 12:05:36 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:13:29.137 12:05:36 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:13:29.137 12:05:36 -- bdev/bdev_raid.sh@485 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:13:29.137 12:05:36 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:13:29.137 12:05:36 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:13:29.137 12:05:36 -- bdev/bdev_raid.sh@485 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:13:29.396 12:05:36 -- bdev/bdev_raid.sh@484 -- # (( i++ )) 00:13:29.396 12:05:36 -- bdev/bdev_raid.sh@484 -- # (( i < num_base_bdevs )) 00:13:29.396 12:05:36 -- bdev/bdev_raid.sh@489 -- # i=3 00:13:29.396 12:05:36 -- bdev/bdev_raid.sh@490 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:29.396 [2024-07-25 12:05:36.693954] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:29.396 [2024-07-25 12:05:36.693988] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:29.396 [2024-07-25 12:05:36.694018] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c69980 00:13:29.396 [2024-07-25 12:05:36.694032] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:29.396 [2024-07-25 12:05:36.694294] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:29.396 [2024-07-25 12:05:36.694307] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:29.396 [2024-07-25 12:05:36.694352] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt4 
00:13:29.396 [2024-07-25 12:05:36.694360] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt4 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:29.396 [2024-07-25 12:05:36.694367] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:29.396 [2024-07-25 12:05:36.694380] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1ac1a70 name raid_bdev1, state configuring 00:13:29.396 [2024-07-25 12:05:36.694402] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:29.396 pt4 00:13:29.655 12:05:36 -- bdev/bdev_raid.sh@494 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:29.655 12:05:36 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:29.655 12:05:36 -- bdev/bdev_raid.sh@118 -- # local expected_state=configuring 00:13:29.655 12:05:36 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:29.655 12:05:36 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:29.655 12:05:36 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:29.655 12:05:36 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:29.655 12:05:36 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:29.655 12:05:36 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:29.655 12:05:36 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:29.655 12:05:36 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:29.655 12:05:36 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.655 12:05:36 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:29.655 "name": "raid_bdev1", 00:13:29.655 "uuid": "4d800771-ed4f-4308-a159-cceca995056f", 00:13:29.655 "strip_size_kb": 0, 00:13:29.655 "state": "configuring", 00:13:29.655 "raid_level": "raid1", 00:13:29.655 "superblock": true, 00:13:29.655 "num_base_bdevs": 4, 00:13:29.655 
"num_base_bdevs_discovered": 1, 00:13:29.655 "num_base_bdevs_operational": 3, 00:13:29.655 "base_bdevs_list": [ 00:13:29.655 { 00:13:29.655 "name": null, 00:13:29.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.655 "is_configured": false, 00:13:29.655 "data_offset": 2048, 00:13:29.655 "data_size": 63488 00:13:29.655 }, 00:13:29.655 { 00:13:29.655 "name": null, 00:13:29.655 "uuid": "4edef9fa-7976-5f07-8ac9-77956dc51695", 00:13:29.655 "is_configured": false, 00:13:29.655 "data_offset": 2048, 00:13:29.655 "data_size": 63488 00:13:29.655 }, 00:13:29.655 { 00:13:29.655 "name": null, 00:13:29.655 "uuid": "c467f976-27c4-5f1f-b5ac-18c5d1bee2d6", 00:13:29.655 "is_configured": false, 00:13:29.655 "data_offset": 2048, 00:13:29.655 "data_size": 63488 00:13:29.655 }, 00:13:29.655 { 00:13:29.655 "name": "pt4", 00:13:29.655 "uuid": "517250bb-cf31-5e6a-80ac-b7cc622e0cdc", 00:13:29.655 "is_configured": true, 00:13:29.655 "data_offset": 2048, 00:13:29.655 "data_size": 63488 00:13:29.655 } 00:13:29.655 ] 00:13:29.655 }' 00:13:29.655 12:05:36 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:29.655 12:05:36 -- common/autotest_common.sh@10 -- # set +x 00:13:30.222 12:05:37 -- bdev/bdev_raid.sh@497 -- # (( i = 1 )) 00:13:30.222 12:05:37 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:13:30.222 12:05:37 -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:30.222 [2024-07-25 12:05:37.508068] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:30.222 [2024-07-25 12:05:37.508107] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.222 [2024-07-25 12:05:37.508137] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ac1750 00:13:30.222 [2024-07-25 12:05:37.508146] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:13:30.222 [2024-07-25 12:05:37.508392] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.222 [2024-07-25 12:05:37.508404] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:30.222 [2024-07-25 12:05:37.508447] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt2 00:13:30.222 [2024-07-25 12:05:37.508464] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:30.222 pt2 00:13:30.222 12:05:37 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:13:30.222 12:05:37 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:13:30.222 12:05:37 -- bdev/bdev_raid.sh@498 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:30.481 [2024-07-25 12:05:37.676497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:30.481 [2024-07-25 12:05:37.676518] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:30.481 [2024-07-25 12:05:37.676546] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1c68c70 00:13:30.481 [2024-07-25 12:05:37.676554] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:30.481 [2024-07-25 12:05:37.676743] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:30.481 [2024-07-25 12:05:37.676755] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:30.481 [2024-07-25 12:05:37.676787] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev pt3 00:13:30.481 [2024-07-25 12:05:37.676797] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:30.481 [2024-07-25 12:05:37.676869] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x1c6e090 00:13:30.481 [2024-07-25 
12:05:37.676875] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:30.481 [2024-07-25 12:05:37.676984] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c5fc80 00:13:30.481 [2024-07-25 12:05:37.677068] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1c6e090 00:13:30.481 [2024-07-25 12:05:37.677074] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1c6e090 00:13:30.481 [2024-07-25 12:05:37.677137] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:30.481 pt3 00:13:30.481 12:05:37 -- bdev/bdev_raid.sh@497 -- # (( i++ )) 00:13:30.481 12:05:37 -- bdev/bdev_raid.sh@497 -- # (( i < num_base_bdevs - 1 )) 00:13:30.481 12:05:37 -- bdev/bdev_raid.sh@502 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:30.481 12:05:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:30.481 12:05:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:30.481 12:05:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:30.481 12:05:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:30.481 12:05:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:13:30.481 12:05:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:30.481 12:05:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:30.481 12:05:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:30.481 12:05:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:30.481 12:05:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.481 12:05:37 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:30.740 12:05:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:30.740 "name": "raid_bdev1", 00:13:30.740 "uuid": "4d800771-ed4f-4308-a159-cceca995056f", 
00:13:30.740 "strip_size_kb": 0, 00:13:30.740 "state": "online", 00:13:30.740 "raid_level": "raid1", 00:13:30.740 "superblock": true, 00:13:30.740 "num_base_bdevs": 4, 00:13:30.740 "num_base_bdevs_discovered": 3, 00:13:30.740 "num_base_bdevs_operational": 3, 00:13:30.740 "base_bdevs_list": [ 00:13:30.740 { 00:13:30.740 "name": null, 00:13:30.740 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.740 "is_configured": false, 00:13:30.740 "data_offset": 2048, 00:13:30.740 "data_size": 63488 00:13:30.740 }, 00:13:30.740 { 00:13:30.740 "name": "pt2", 00:13:30.740 "uuid": "4edef9fa-7976-5f07-8ac9-77956dc51695", 00:13:30.740 "is_configured": true, 00:13:30.740 "data_offset": 2048, 00:13:30.740 "data_size": 63488 00:13:30.740 }, 00:13:30.740 { 00:13:30.740 "name": "pt3", 00:13:30.740 "uuid": "c467f976-27c4-5f1f-b5ac-18c5d1bee2d6", 00:13:30.740 "is_configured": true, 00:13:30.740 "data_offset": 2048, 00:13:30.740 "data_size": 63488 00:13:30.740 }, 00:13:30.740 { 00:13:30.740 "name": "pt4", 00:13:30.740 "uuid": "517250bb-cf31-5e6a-80ac-b7cc622e0cdc", 00:13:30.740 "is_configured": true, 00:13:30.740 "data_offset": 2048, 00:13:30.740 "data_size": 63488 00:13:30.740 } 00:13:30.740 ] 00:13:30.740 }' 00:13:30.740 12:05:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:30.740 12:05:37 -- common/autotest_common.sh@10 -- # set +x 00:13:31.307 12:05:38 -- bdev/bdev_raid.sh@506 -- # jq -r '.[] | .uuid' 00:13:31.307 12:05:38 -- bdev/bdev_raid.sh@506 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:31.307 [2024-07-25 12:05:38.530822] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:31.307 12:05:38 -- bdev/bdev_raid.sh@506 -- # '[' 4d800771-ed4f-4308-a159-cceca995056f '!=' 4d800771-ed4f-4308-a159-cceca995056f ']' 00:13:31.308 12:05:38 -- bdev/bdev_raid.sh@511 -- # killprocess 1252274 00:13:31.308 12:05:38 -- common/autotest_common.sh@926 -- # '[' -z 1252274 ']' 
00:13:31.308 12:05:38 -- common/autotest_common.sh@930 -- # kill -0 1252274 00:13:31.308 12:05:38 -- common/autotest_common.sh@931 -- # uname 00:13:31.308 12:05:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:31.308 12:05:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1252274 00:13:31.308 12:05:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:31.308 12:05:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:31.308 12:05:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1252274' 00:13:31.308 killing process with pid 1252274 00:13:31.308 12:05:38 -- common/autotest_common.sh@945 -- # kill 1252274 00:13:31.308 [2024-07-25 12:05:38.601701] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:31.308 [2024-07-25 12:05:38.601745] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:31.308 [2024-07-25 12:05:38.601791] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:31.308 [2024-07-25 12:05:38.601799] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1c6e090 name raid_bdev1, state offline 00:13:31.308 12:05:38 -- common/autotest_common.sh@950 -- # wait 1252274 00:13:31.566 [2024-07-25 12:05:38.639443] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:31.566 12:05:38 -- bdev/bdev_raid.sh@513 -- # return 0 00:13:31.566 00:13:31.566 real 0m15.749s 00:13:31.566 user 0m28.232s 00:13:31.566 sys 0m3.093s 00:13:31.566 12:05:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:31.566 12:05:38 -- common/autotest_common.sh@10 -- # set +x 00:13:31.566 ************************************ 00:13:31.566 END TEST raid_superblock_test 00:13:31.566 ************************************ 00:13:31.826 12:05:38 -- bdev/bdev_raid.sh@733 -- # '[' true = true ']' 00:13:31.826 12:05:38 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:13:31.826 
12:05:38 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false 00:13:31.826 12:05:38 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:13:31.826 12:05:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:31.826 12:05:38 -- common/autotest_common.sh@10 -- # set +x 00:13:31.826 ************************************ 00:13:31.826 START TEST raid_rebuild_test 00:13:31.826 ************************************ 00:13:31.826 12:05:38 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 false false 00:13:31.826 12:05:38 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:13:31.826 12:05:38 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:13:31.826 12:05:38 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:13:31.826 12:05:38 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:13:31.826 12:05:38 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:13:31.826 12:05:38 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:13:31.826 12:05:38 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:13:31.826 12:05:38 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:13:31.826 12:05:38 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:13:31.826 12:05:38 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:13:31.826 12:05:38 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:13:31.826 12:05:38 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:13:31.826 12:05:38 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:31.826 12:05:38 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:13:31.826 12:05:38 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:13:31.826 12:05:38 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:13:31.826 12:05:38 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:13:31.826 12:05:38 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:13:31.826 12:05:38 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:13:31.826 12:05:38 -- bdev/bdev_raid.sh@528 -- # 
'[' raid1 '!=' raid1 ']' 00:13:31.826 12:05:38 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:13:31.826 12:05:38 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:13:31.826 12:05:38 -- bdev/bdev_raid.sh@544 -- # raid_pid=1254725 00:13:31.826 12:05:38 -- bdev/bdev_raid.sh@545 -- # waitforlisten 1254725 /var/tmp/spdk-raid.sock 00:13:31.826 12:05:38 -- bdev/bdev_raid.sh@543 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:31.826 12:05:38 -- common/autotest_common.sh@819 -- # '[' -z 1254725 ']' 00:13:31.826 12:05:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:31.826 12:05:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:31.826 12:05:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:31.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:31.826 12:05:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:31.826 12:05:38 -- common/autotest_common.sh@10 -- # set +x 00:13:31.826 [2024-07-25 12:05:38.970267] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:13:31.826 [2024-07-25 12:05:38.970321] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1254725 ] 00:13:31.826 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:31.826 Zero copy mechanism will not be used. 
00:13:31.826 [2024-07-25 12:05:39.056559] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.085 [2024-07-25 12:05:39.143609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.085 [2024-07-25 12:05:39.193279] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:32.085 [2024-07-25 12:05:39.193307] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:32.651 12:05:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:32.651 12:05:39 -- common/autotest_common.sh@852 -- # return 0 00:13:32.651 12:05:39 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:13:32.651 12:05:39 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:13:32.651 12:05:39 -- bdev/bdev_raid.sh@553 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:32.651 BaseBdev1 00:13:32.651 12:05:39 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:13:32.651 12:05:39 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:13:32.651 12:05:39 -- bdev/bdev_raid.sh@553 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:32.910 BaseBdev2 00:13:32.910 12:05:40 -- bdev/bdev_raid.sh@558 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:13:33.168 spare_malloc 00:13:33.168 12:05:40 -- bdev/bdev_raid.sh@559 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:33.168 spare_delay 00:13:33.168 12:05:40 -- bdev/bdev_raid.sh@560 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:13:33.427 [2024-07-25 12:05:40.603282] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:33.427 [2024-07-25 12:05:40.603317] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:33.427 [2024-07-25 12:05:40.603334] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x112a070 00:13:33.427 [2024-07-25 12:05:40.603344] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:33.427 [2024-07-25 12:05:40.604482] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:33.427 [2024-07-25 12:05:40.604505] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:33.427 spare 00:13:33.427 12:05:40 -- bdev/bdev_raid.sh@563 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:13:33.686 [2024-07-25 12:05:40.779752] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:33.686 [2024-07-25 12:05:40.780683] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:33.686 [2024-07-25 12:05:40.780739] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x11ea2d0 00:13:33.686 [2024-07-25 12:05:40.780746] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:33.686 [2024-07-25 12:05:40.780897] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11edc20 00:13:33.686 [2024-07-25 12:05:40.780978] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x11ea2d0 00:13:33.686 [2024-07-25 12:05:40.780984] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x11ea2d0 00:13:33.686 [2024-07-25 12:05:40.781063] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:33.686 12:05:40 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:13:33.686 12:05:40 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:33.686 12:05:40 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:33.686 12:05:40 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:33.686 12:05:40 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:33.686 12:05:40 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:33.686 12:05:40 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:33.686 12:05:40 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:33.686 12:05:40 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:33.686 12:05:40 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:33.686 12:05:40 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:33.686 12:05:40 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.686 12:05:40 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:33.686 "name": "raid_bdev1", 00:13:33.686 "uuid": "58814455-a109-464a-bbfd-e4ea3d1187bd", 00:13:33.686 "strip_size_kb": 0, 00:13:33.686 "state": "online", 00:13:33.686 "raid_level": "raid1", 00:13:33.686 "superblock": false, 00:13:33.686 "num_base_bdevs": 2, 00:13:33.686 "num_base_bdevs_discovered": 2, 00:13:33.686 "num_base_bdevs_operational": 2, 00:13:33.686 "base_bdevs_list": [ 00:13:33.686 { 00:13:33.686 "name": "BaseBdev1", 00:13:33.686 "uuid": "6c2b1280-a68a-470c-942a-1fa0c71c1513", 00:13:33.686 "is_configured": true, 00:13:33.686 "data_offset": 0, 00:13:33.686 "data_size": 65536 00:13:33.686 }, 00:13:33.686 { 00:13:33.686 "name": "BaseBdev2", 00:13:33.686 "uuid": "70c53546-6995-48b5-a6b5-afff058152c5", 00:13:33.686 "is_configured": true, 00:13:33.686 "data_offset": 0, 00:13:33.686 "data_size": 65536 00:13:33.686 } 00:13:33.686 ] 00:13:33.686 }' 00:13:33.686 12:05:40 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:33.686 
12:05:40 -- common/autotest_common.sh@10 -- # set +x 00:13:34.253 12:05:41 -- bdev/bdev_raid.sh@567 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:34.253 12:05:41 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:13:34.512 [2024-07-25 12:05:41.597969] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:34.512 12:05:41 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:13:34.512 12:05:41 -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:34.512 12:05:41 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:34.512 12:05:41 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:13:34.512 12:05:41 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:13:34.512 12:05:41 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:13:34.512 12:05:41 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:13:34.512 12:05:41 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:34.512 12:05:41 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:34.512 12:05:41 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:34.512 12:05:41 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:34.512 12:05:41 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:34.512 12:05:41 -- bdev/nbd_common.sh@12 -- # local i 00:13:34.512 12:05:41 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:34.512 12:05:41 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:34.512 12:05:41 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:34.771 [2024-07-25 12:05:41.942783] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1129480 00:13:34.771 /dev/nbd0 00:13:34.771 12:05:41 -- 
bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:34.771 12:05:41 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:34.771 12:05:41 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:13:34.771 12:05:41 -- common/autotest_common.sh@857 -- # local i 00:13:34.771 12:05:41 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:34.771 12:05:41 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:34.771 12:05:41 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:13:34.771 12:05:41 -- common/autotest_common.sh@861 -- # break 00:13:34.771 12:05:41 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:34.771 12:05:41 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:34.771 12:05:41 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.771 1+0 records in 00:13:34.771 1+0 records out 00:13:34.771 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000182437 s, 22.5 MB/s 00:13:34.771 12:05:41 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:13:34.771 12:05:41 -- common/autotest_common.sh@874 -- # size=4096 00:13:34.771 12:05:41 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:13:34.771 12:05:41 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:34.771 12:05:41 -- common/autotest_common.sh@877 -- # return 0 00:13:34.771 12:05:41 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:34.771 12:05:41 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:34.771 12:05:41 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:13:34.771 12:05:41 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1 00:13:34.771 12:05:41 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:13:38.958 65536+0 records in 00:13:38.958 65536+0 records out 00:13:38.958 33554432 bytes (34 MB, 32 
MiB) copied, 3.49185 s, 9.6 MB/s 00:13:38.959 12:05:45 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:13:38.959 12:05:45 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:38.959 12:05:45 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:38.959 12:05:45 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:38.959 12:05:45 -- bdev/nbd_common.sh@51 -- # local i 00:13:38.959 12:05:45 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:38.959 12:05:45 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:13:38.959 12:05:45 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:38.959 12:05:45 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:38.959 12:05:45 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:38.959 12:05:45 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:38.959 12:05:45 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:38.959 12:05:45 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:38.959 [2024-07-25 12:05:45.652955] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:38.959 12:05:45 -- bdev/nbd_common.sh@41 -- # break 00:13:38.959 12:05:45 -- bdev/nbd_common.sh@45 -- # return 0 00:13:38.959 12:05:45 -- bdev/bdev_raid.sh@591 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:13:38.959 [2024-07-25 12:05:45.813413] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:38.959 12:05:45 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:38.959 12:05:45 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:38.959 12:05:45 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:38.959 12:05:45 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:38.959 12:05:45 -- 
bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:38.959 12:05:45 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:13:38.959 12:05:45 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:38.959 12:05:45 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:38.959 12:05:45 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:38.959 12:05:45 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:38.959 12:05:45 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:38.959 12:05:45 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.959 12:05:45 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:38.959 "name": "raid_bdev1", 00:13:38.959 "uuid": "58814455-a109-464a-bbfd-e4ea3d1187bd", 00:13:38.959 "strip_size_kb": 0, 00:13:38.959 "state": "online", 00:13:38.959 "raid_level": "raid1", 00:13:38.959 "superblock": false, 00:13:38.959 "num_base_bdevs": 2, 00:13:38.959 "num_base_bdevs_discovered": 1, 00:13:38.959 "num_base_bdevs_operational": 1, 00:13:38.959 "base_bdevs_list": [ 00:13:38.959 { 00:13:38.959 "name": null, 00:13:38.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.959 "is_configured": false, 00:13:38.959 "data_offset": 0, 00:13:38.959 "data_size": 65536 00:13:38.959 }, 00:13:38.959 { 00:13:38.959 "name": "BaseBdev2", 00:13:38.959 "uuid": "70c53546-6995-48b5-a6b5-afff058152c5", 00:13:38.959 "is_configured": true, 00:13:38.959 "data_offset": 0, 00:13:38.959 "data_size": 65536 00:13:38.959 } 00:13:38.959 ] 00:13:38.959 }' 00:13:38.959 12:05:45 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:38.959 12:05:45 -- common/autotest_common.sh@10 -- # set +x 00:13:39.217 12:05:46 -- bdev/bdev_raid.sh@597 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:13:39.475 [2024-07-25 12:05:46.639541] 
bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:13:39.475 [2024-07-25 12:05:46.639564] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:39.475 [2024-07-25 12:05:46.643997] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11ea5b0 00:13:39.475 [2024-07-25 12:05:46.645808] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:39.475 12:05:46 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:13:40.410 12:05:47 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:40.410 12:05:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:13:40.410 12:05:47 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:13:40.410 12:05:47 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:13:40.410 12:05:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:13:40.410 12:05:47 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:40.410 12:05:47 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.669 12:05:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:13:40.669 "name": "raid_bdev1", 00:13:40.669 "uuid": "58814455-a109-464a-bbfd-e4ea3d1187bd", 00:13:40.669 "strip_size_kb": 0, 00:13:40.669 "state": "online", 00:13:40.669 "raid_level": "raid1", 00:13:40.669 "superblock": false, 00:13:40.669 "num_base_bdevs": 2, 00:13:40.669 "num_base_bdevs_discovered": 2, 00:13:40.669 "num_base_bdevs_operational": 2, 00:13:40.669 "process": { 00:13:40.669 "type": "rebuild", 00:13:40.669 "target": "spare", 00:13:40.669 "progress": { 00:13:40.669 "blocks": 22528, 00:13:40.669 "percent": 34 00:13:40.669 } 00:13:40.669 }, 00:13:40.669 "base_bdevs_list": [ 00:13:40.669 { 00:13:40.669 "name": "spare", 00:13:40.669 "uuid": "29467a15-1ac5-5c65-8a08-a1499aebe959", 00:13:40.669 "is_configured": true, 
00:13:40.669 "data_offset": 0, 00:13:40.669 "data_size": 65536 00:13:40.669 }, 00:13:40.669 { 00:13:40.669 "name": "BaseBdev2", 00:13:40.669 "uuid": "70c53546-6995-48b5-a6b5-afff058152c5", 00:13:40.669 "is_configured": true, 00:13:40.669 "data_offset": 0, 00:13:40.669 "data_size": 65536 00:13:40.669 } 00:13:40.669 ] 00:13:40.669 }' 00:13:40.669 12:05:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:13:40.669 12:05:47 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:40.669 12:05:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:13:40.669 12:05:47 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:13:40.669 12:05:47 -- bdev/bdev_raid.sh@604 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:13:40.928 [2024-07-25 12:05:48.084527] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:40.928 [2024-07-25 12:05:48.156717] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:40.928 [2024-07-25 12:05:48.156749] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:40.928 12:05:48 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:40.928 12:05:48 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:40.928 12:05:48 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:40.928 12:05:48 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:40.928 12:05:48 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:40.928 12:05:48 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:13:40.928 12:05:48 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:40.928 12:05:48 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:40.928 12:05:48 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:40.928 12:05:48 -- bdev/bdev_raid.sh@125 -- # 
local tmp 00:13:40.928 12:05:48 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.928 12:05:48 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:41.187 12:05:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:41.187 "name": "raid_bdev1", 00:13:41.187 "uuid": "58814455-a109-464a-bbfd-e4ea3d1187bd", 00:13:41.187 "strip_size_kb": 0, 00:13:41.187 "state": "online", 00:13:41.187 "raid_level": "raid1", 00:13:41.187 "superblock": false, 00:13:41.187 "num_base_bdevs": 2, 00:13:41.187 "num_base_bdevs_discovered": 1, 00:13:41.187 "num_base_bdevs_operational": 1, 00:13:41.187 "base_bdevs_list": [ 00:13:41.187 { 00:13:41.187 "name": null, 00:13:41.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.187 "is_configured": false, 00:13:41.187 "data_offset": 0, 00:13:41.187 "data_size": 65536 00:13:41.187 }, 00:13:41.187 { 00:13:41.187 "name": "BaseBdev2", 00:13:41.187 "uuid": "70c53546-6995-48b5-a6b5-afff058152c5", 00:13:41.187 "is_configured": true, 00:13:41.187 "data_offset": 0, 00:13:41.187 "data_size": 65536 00:13:41.187 } 00:13:41.187 ] 00:13:41.187 }' 00:13:41.187 12:05:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:41.187 12:05:48 -- common/autotest_common.sh@10 -- # set +x 00:13:41.786 12:05:48 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:41.786 12:05:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:13:41.786 12:05:48 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:13:41.786 12:05:48 -- bdev/bdev_raid.sh@185 -- # local target=none 00:13:41.786 12:05:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:13:41.786 12:05:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.786 12:05:48 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:13:41.786 12:05:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:13:41.786 "name": "raid_bdev1", 00:13:41.786 "uuid": "58814455-a109-464a-bbfd-e4ea3d1187bd", 00:13:41.786 "strip_size_kb": 0, 00:13:41.786 "state": "online", 00:13:41.786 "raid_level": "raid1", 00:13:41.786 "superblock": false, 00:13:41.786 "num_base_bdevs": 2, 00:13:41.786 "num_base_bdevs_discovered": 1, 00:13:41.786 "num_base_bdevs_operational": 1, 00:13:41.786 "base_bdevs_list": [ 00:13:41.786 { 00:13:41.786 "name": null, 00:13:41.786 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.786 "is_configured": false, 00:13:41.786 "data_offset": 0, 00:13:41.786 "data_size": 65536 00:13:41.786 }, 00:13:41.786 { 00:13:41.786 "name": "BaseBdev2", 00:13:41.786 "uuid": "70c53546-6995-48b5-a6b5-afff058152c5", 00:13:41.786 "is_configured": true, 00:13:41.786 "data_offset": 0, 00:13:41.786 "data_size": 65536 00:13:41.786 } 00:13:41.786 ] 00:13:41.786 }' 00:13:41.786 12:05:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:13:41.786 12:05:49 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:13:41.786 12:05:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:13:41.786 12:05:49 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:13:41.786 12:05:49 -- bdev/bdev_raid.sh@613 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:13:42.045 [2024-07-25 12:05:49.243690] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:13:42.045 [2024-07-25 12:05:49.243713] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:42.045 [2024-07-25 12:05:49.248169] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x11ea5b0 00:13:42.045 [2024-07-25 12:05:49.249218] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:42.045 12:05:49 -- bdev/bdev_raid.sh@614 -- # 
sleep 1 00:13:42.981 12:05:50 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:42.981 12:05:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:13:42.981 12:05:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:13:42.981 12:05:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:13:42.981 12:05:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:13:42.981 12:05:50 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:42.981 12:05:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.240 12:05:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:13:43.240 "name": "raid_bdev1", 00:13:43.240 "uuid": "58814455-a109-464a-bbfd-e4ea3d1187bd", 00:13:43.240 "strip_size_kb": 0, 00:13:43.240 "state": "online", 00:13:43.240 "raid_level": "raid1", 00:13:43.240 "superblock": false, 00:13:43.240 "num_base_bdevs": 2, 00:13:43.240 "num_base_bdevs_discovered": 2, 00:13:43.240 "num_base_bdevs_operational": 2, 00:13:43.240 "process": { 00:13:43.240 "type": "rebuild", 00:13:43.240 "target": "spare", 00:13:43.240 "progress": { 00:13:43.240 "blocks": 22528, 00:13:43.240 "percent": 34 00:13:43.240 } 00:13:43.240 }, 00:13:43.240 "base_bdevs_list": [ 00:13:43.240 { 00:13:43.240 "name": "spare", 00:13:43.240 "uuid": "29467a15-1ac5-5c65-8a08-a1499aebe959", 00:13:43.240 "is_configured": true, 00:13:43.240 "data_offset": 0, 00:13:43.240 "data_size": 65536 00:13:43.240 }, 00:13:43.240 { 00:13:43.240 "name": "BaseBdev2", 00:13:43.240 "uuid": "70c53546-6995-48b5-a6b5-afff058152c5", 00:13:43.240 "is_configured": true, 00:13:43.240 "data_offset": 0, 00:13:43.240 "data_size": 65536 00:13:43.240 } 00:13:43.240 ] 00:13:43.240 }' 00:13:43.240 12:05:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:13:43.240 12:05:50 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:13:43.240 12:05:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:13:43.240 12:05:50 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:13:43.240 12:05:50 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:13:43.240 12:05:50 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:13:43.240 12:05:50 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:13:43.240 12:05:50 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:13:43.240 12:05:50 -- bdev/bdev_raid.sh@657 -- # local timeout=289 00:13:43.240 12:05:50 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:13:43.240 12:05:50 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:43.240 12:05:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:13:43.240 12:05:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:13:43.240 12:05:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:13:43.240 12:05:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:13:43.240 12:05:50 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:43.240 12:05:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.498 12:05:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:13:43.498 "name": "raid_bdev1", 00:13:43.498 "uuid": "58814455-a109-464a-bbfd-e4ea3d1187bd", 00:13:43.498 "strip_size_kb": 0, 00:13:43.498 "state": "online", 00:13:43.498 "raid_level": "raid1", 00:13:43.498 "superblock": false, 00:13:43.498 "num_base_bdevs": 2, 00:13:43.498 "num_base_bdevs_discovered": 2, 00:13:43.498 "num_base_bdevs_operational": 2, 00:13:43.498 "process": { 00:13:43.498 "type": "rebuild", 00:13:43.498 "target": "spare", 00:13:43.498 "progress": { 00:13:43.498 "blocks": 26624, 00:13:43.498 "percent": 40 00:13:43.498 } 00:13:43.498 }, 00:13:43.498 "base_bdevs_list": [ 00:13:43.498 { 00:13:43.498 "name": "spare", 
00:13:43.498 "uuid": "29467a15-1ac5-5c65-8a08-a1499aebe959", 00:13:43.498 "is_configured": true, 00:13:43.498 "data_offset": 0, 00:13:43.498 "data_size": 65536 00:13:43.498 }, 00:13:43.498 { 00:13:43.498 "name": "BaseBdev2", 00:13:43.498 "uuid": "70c53546-6995-48b5-a6b5-afff058152c5", 00:13:43.498 "is_configured": true, 00:13:43.498 "data_offset": 0, 00:13:43.498 "data_size": 65536 00:13:43.498 } 00:13:43.498 ] 00:13:43.498 }' 00:13:43.498 12:05:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:13:43.498 12:05:50 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:43.498 12:05:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:13:43.498 12:05:50 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:13:43.498 12:05:50 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:13:44.874 12:05:51 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:13:44.874 12:05:51 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:44.874 12:05:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:13:44.874 12:05:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:13:44.874 12:05:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:13:44.874 12:05:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:13:44.874 12:05:51 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:44.874 12:05:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:44.874 12:05:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:13:44.874 "name": "raid_bdev1", 00:13:44.874 "uuid": "58814455-a109-464a-bbfd-e4ea3d1187bd", 00:13:44.874 "strip_size_kb": 0, 00:13:44.874 "state": "online", 00:13:44.874 "raid_level": "raid1", 00:13:44.874 "superblock": false, 00:13:44.874 "num_base_bdevs": 2, 00:13:44.874 "num_base_bdevs_discovered": 2, 00:13:44.874 "num_base_bdevs_operational": 2, 
00:13:44.874 "process": { 00:13:44.874 "type": "rebuild", 00:13:44.874 "target": "spare", 00:13:44.874 "progress": { 00:13:44.874 "blocks": 53248, 00:13:44.874 "percent": 81 00:13:44.874 } 00:13:44.874 }, 00:13:44.874 "base_bdevs_list": [ 00:13:44.874 { 00:13:44.874 "name": "spare", 00:13:44.874 "uuid": "29467a15-1ac5-5c65-8a08-a1499aebe959", 00:13:44.874 "is_configured": true, 00:13:44.874 "data_offset": 0, 00:13:44.874 "data_size": 65536 00:13:44.874 }, 00:13:44.874 { 00:13:44.874 "name": "BaseBdev2", 00:13:44.874 "uuid": "70c53546-6995-48b5-a6b5-afff058152c5", 00:13:44.874 "is_configured": true, 00:13:44.874 "data_offset": 0, 00:13:44.874 "data_size": 65536 00:13:44.874 } 00:13:44.874 ] 00:13:44.874 }' 00:13:44.874 12:05:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:13:44.874 12:05:51 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:44.874 12:05:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:13:44.874 12:05:51 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:13:44.874 12:05:51 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:13:45.440 [2024-07-25 12:05:52.472511] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:45.440 [2024-07-25 12:05:52.472551] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:45.440 [2024-07-25 12:05:52.472575] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.699 12:05:52 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:13:45.699 12:05:52 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:45.699 12:05:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:13:45.699 12:05:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:13:45.699 12:05:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:13:45.699 12:05:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:13:45.699 12:05:52 -- 
bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:45.699 12:05:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.989 12:05:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:13:45.989 "name": "raid_bdev1", 00:13:45.989 "uuid": "58814455-a109-464a-bbfd-e4ea3d1187bd", 00:13:45.989 "strip_size_kb": 0, 00:13:45.989 "state": "online", 00:13:45.990 "raid_level": "raid1", 00:13:45.990 "superblock": false, 00:13:45.990 "num_base_bdevs": 2, 00:13:45.990 "num_base_bdevs_discovered": 2, 00:13:45.990 "num_base_bdevs_operational": 2, 00:13:45.990 "base_bdevs_list": [ 00:13:45.990 { 00:13:45.990 "name": "spare", 00:13:45.990 "uuid": "29467a15-1ac5-5c65-8a08-a1499aebe959", 00:13:45.990 "is_configured": true, 00:13:45.990 "data_offset": 0, 00:13:45.990 "data_size": 65536 00:13:45.990 }, 00:13:45.990 { 00:13:45.990 "name": "BaseBdev2", 00:13:45.990 "uuid": "70c53546-6995-48b5-a6b5-afff058152c5", 00:13:45.990 "is_configured": true, 00:13:45.990 "data_offset": 0, 00:13:45.990 "data_size": 65536 00:13:45.990 } 00:13:45.990 ] 00:13:45.990 }' 00:13:45.990 12:05:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:13:45.990 12:05:53 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:45.990 12:05:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:13:45.990 12:05:53 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:13:45.990 12:05:53 -- bdev/bdev_raid.sh@660 -- # break 00:13:45.990 12:05:53 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:45.990 12:05:53 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:13:45.990 12:05:53 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:13:45.990 12:05:53 -- bdev/bdev_raid.sh@185 -- # local target=none 00:13:45.990 12:05:53 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:13:45.990 12:05:53 -- 
bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:45.990 12:05:53 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.249 12:05:53 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:13:46.249 "name": "raid_bdev1", 00:13:46.249 "uuid": "58814455-a109-464a-bbfd-e4ea3d1187bd", 00:13:46.249 "strip_size_kb": 0, 00:13:46.249 "state": "online", 00:13:46.249 "raid_level": "raid1", 00:13:46.249 "superblock": false, 00:13:46.249 "num_base_bdevs": 2, 00:13:46.249 "num_base_bdevs_discovered": 2, 00:13:46.249 "num_base_bdevs_operational": 2, 00:13:46.249 "base_bdevs_list": [ 00:13:46.249 { 00:13:46.249 "name": "spare", 00:13:46.249 "uuid": "29467a15-1ac5-5c65-8a08-a1499aebe959", 00:13:46.249 "is_configured": true, 00:13:46.249 "data_offset": 0, 00:13:46.249 "data_size": 65536 00:13:46.249 }, 00:13:46.249 { 00:13:46.249 "name": "BaseBdev2", 00:13:46.249 "uuid": "70c53546-6995-48b5-a6b5-afff058152c5", 00:13:46.249 "is_configured": true, 00:13:46.249 "data_offset": 0, 00:13:46.249 "data_size": 65536 00:13:46.249 } 00:13:46.249 ] 00:13:46.249 }' 00:13:46.249 12:05:53 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:13:46.249 12:05:53 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:13:46.249 12:05:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:13:46.249 12:05:53 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:13:46.249 12:05:53 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:46.249 12:05:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:46.249 12:05:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:46.249 12:05:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:46.249 12:05:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:46.249 12:05:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 
00:13:46.249 12:05:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:46.249 12:05:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:46.249 12:05:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:46.249 12:05:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:46.249 12:05:53 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:46.249 12:05:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.507 12:05:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:46.507 "name": "raid_bdev1", 00:13:46.507 "uuid": "58814455-a109-464a-bbfd-e4ea3d1187bd", 00:13:46.507 "strip_size_kb": 0, 00:13:46.507 "state": "online", 00:13:46.507 "raid_level": "raid1", 00:13:46.507 "superblock": false, 00:13:46.507 "num_base_bdevs": 2, 00:13:46.507 "num_base_bdevs_discovered": 2, 00:13:46.507 "num_base_bdevs_operational": 2, 00:13:46.507 "base_bdevs_list": [ 00:13:46.507 { 00:13:46.507 "name": "spare", 00:13:46.507 "uuid": "29467a15-1ac5-5c65-8a08-a1499aebe959", 00:13:46.507 "is_configured": true, 00:13:46.507 "data_offset": 0, 00:13:46.507 "data_size": 65536 00:13:46.507 }, 00:13:46.507 { 00:13:46.507 "name": "BaseBdev2", 00:13:46.507 "uuid": "70c53546-6995-48b5-a6b5-afff058152c5", 00:13:46.507 "is_configured": true, 00:13:46.507 "data_offset": 0, 00:13:46.507 "data_size": 65536 00:13:46.507 } 00:13:46.507 ] 00:13:46.507 }' 00:13:46.507 12:05:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:46.507 12:05:53 -- common/autotest_common.sh@10 -- # set +x 00:13:47.075 12:05:54 -- bdev/bdev_raid.sh@670 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:47.075 [2024-07-25 12:05:54.289077] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:47.075 [2024-07-25 12:05:54.289101] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: 
raid bdev state changing from online to offline 00:13:47.075 [2024-07-25 12:05:54.289143] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:47.075 [2024-07-25 12:05:54.289180] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:47.075 [2024-07-25 12:05:54.289188] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x11ea2d0 name raid_bdev1, state offline 00:13:47.075 12:05:54 -- bdev/bdev_raid.sh@671 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:47.075 12:05:54 -- bdev/bdev_raid.sh@671 -- # jq length 00:13:47.334 12:05:54 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:13:47.334 12:05:54 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:13:47.334 12:05:54 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:47.334 12:05:54 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:47.334 12:05:54 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:47.334 12:05:54 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:47.334 12:05:54 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:47.334 12:05:54 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:47.334 12:05:54 -- bdev/nbd_common.sh@12 -- # local i 00:13:47.334 12:05:54 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:47.334 12:05:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:47.334 12:05:54 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:47.334 /dev/nbd0 00:13:47.593 12:05:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:47.593 12:05:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:47.593 12:05:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:13:47.593 12:05:54 -- 
common/autotest_common.sh@857 -- # local i 00:13:47.593 12:05:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:47.593 12:05:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:47.593 12:05:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:13:47.593 12:05:54 -- common/autotest_common.sh@861 -- # break 00:13:47.593 12:05:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:47.593 12:05:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:47.593 12:05:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:47.593 1+0 records in 00:13:47.593 1+0 records out 00:13:47.593 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204317 s, 20.0 MB/s 00:13:47.593 12:05:54 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:13:47.593 12:05:54 -- common/autotest_common.sh@874 -- # size=4096 00:13:47.593 12:05:54 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:13:47.593 12:05:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:47.593 12:05:54 -- common/autotest_common.sh@877 -- # return 0 00:13:47.593 12:05:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:47.593 12:05:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:47.593 12:05:54 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:13:47.593 /dev/nbd1 00:13:47.593 12:05:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:47.593 12:05:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:47.593 12:05:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:13:47.593 12:05:54 -- common/autotest_common.sh@857 -- # local i 00:13:47.593 12:05:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:13:47.593 12:05:54 -- 
common/autotest_common.sh@859 -- # (( i <= 20 )) 00:13:47.593 12:05:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:13:47.593 12:05:54 -- common/autotest_common.sh@861 -- # break 00:13:47.593 12:05:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:13:47.593 12:05:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:13:47.593 12:05:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:47.593 1+0 records in 00:13:47.593 1+0 records out 00:13:47.593 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000316731 s, 12.9 MB/s 00:13:47.593 12:05:54 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:13:47.593 12:05:54 -- common/autotest_common.sh@874 -- # size=4096 00:13:47.593 12:05:54 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:13:47.593 12:05:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:13:47.593 12:05:54 -- common/autotest_common.sh@877 -- # return 0 00:13:47.593 12:05:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:47.593 12:05:54 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:47.593 12:05:54 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:47.852 12:05:54 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:13:47.852 12:05:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:13:47.852 12:05:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:47.852 12:05:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:47.852 12:05:54 -- bdev/nbd_common.sh@51 -- # local i 00:13:47.852 12:05:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:47.852 12:05:54 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
nbd_stop_disk /dev/nbd0 00:13:47.852 12:05:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:47.852 12:05:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:47.852 12:05:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:47.852 12:05:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:47.852 12:05:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:47.852 12:05:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:47.852 12:05:55 -- bdev/nbd_common.sh@41 -- # break 00:13:47.852 12:05:55 -- bdev/nbd_common.sh@45 -- # return 0 00:13:47.852 12:05:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:47.852 12:05:55 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:13:48.111 12:05:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:48.111 12:05:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:48.111 12:05:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:48.111 12:05:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:48.111 12:05:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:48.111 12:05:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:48.111 12:05:55 -- bdev/nbd_common.sh@41 -- # break 00:13:48.111 12:05:55 -- bdev/nbd_common.sh@45 -- # return 0 00:13:48.111 12:05:55 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:13:48.111 12:05:55 -- bdev/bdev_raid.sh@709 -- # killprocess 1254725 00:13:48.111 12:05:55 -- common/autotest_common.sh@926 -- # '[' -z 1254725 ']' 00:13:48.111 12:05:55 -- common/autotest_common.sh@930 -- # kill -0 1254725 00:13:48.111 12:05:55 -- common/autotest_common.sh@931 -- # uname 00:13:48.111 12:05:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:48.111 12:05:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1254725 00:13:48.111 12:05:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 
00:13:48.111 12:05:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:48.111 12:05:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1254725' 00:13:48.111 killing process with pid 1254725 00:13:48.111 12:05:55 -- common/autotest_common.sh@945 -- # kill 1254725 00:13:48.111 Received shutdown signal, test time was about 60.000000 seconds 00:13:48.111 00:13:48.111 Latency(us) 00:13:48.111 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:48.111 =================================================================================================================== 00:13:48.111 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:48.111 [2024-07-25 12:05:55.302972] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:48.111 12:05:55 -- common/autotest_common.sh@950 -- # wait 1254725 00:13:48.111 [2024-07-25 12:05:55.327622] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:48.370 12:05:55 -- bdev/bdev_raid.sh@711 -- # return 0 00:13:48.370 00:13:48.370 real 0m16.617s 00:13:48.370 user 0m21.819s 00:13:48.370 sys 0m3.913s 00:13:48.370 12:05:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:48.370 12:05:55 -- common/autotest_common.sh@10 -- # set +x 00:13:48.370 ************************************ 00:13:48.370 END TEST raid_rebuild_test 00:13:48.370 ************************************ 00:13:48.370 12:05:55 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false 00:13:48.370 12:05:55 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:13:48.370 12:05:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:48.370 12:05:55 -- common/autotest_common.sh@10 -- # set +x 00:13:48.370 ************************************ 00:13:48.370 START TEST raid_rebuild_test_sb 00:13:48.370 ************************************ 00:13:48.370 12:05:55 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 
true false 00:13:48.370 12:05:55 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:13:48.370 12:05:55 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:13:48.370 12:05:55 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:13:48.370 12:05:55 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:13:48.370 12:05:55 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:13:48.370 12:05:55 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:13:48.370 12:05:55 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:13:48.370 12:05:55 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:13:48.370 12:05:55 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:13:48.370 12:05:55 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:13:48.370 12:05:55 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:13:48.370 12:05:55 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:13:48.370 12:05:55 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:48.370 12:05:55 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:13:48.370 12:05:55 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:13:48.370 12:05:55 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:13:48.370 12:05:55 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:13:48.370 12:05:55 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:13:48.370 12:05:55 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:13:48.370 12:05:55 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:13:48.370 12:05:55 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:13:48.370 12:05:55 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:13:48.370 12:05:55 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:13:48.370 12:05:55 -- bdev/bdev_raid.sh@544 -- # raid_pid=1257181 00:13:48.370 12:05:55 -- bdev/bdev_raid.sh@545 -- # waitforlisten 1257181 /var/tmp/spdk-raid.sock 00:13:48.371 12:05:55 -- common/autotest_common.sh@819 -- # '[' -z 1257181 ']' 00:13:48.371 12:05:55 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/spdk-raid.sock 00:13:48.371 12:05:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:48.371 12:05:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:48.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:48.371 12:05:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:48.371 12:05:55 -- common/autotest_common.sh@10 -- # set +x 00:13:48.371 12:05:55 -- bdev/bdev_raid.sh@543 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:48.371 [2024-07-25 12:05:55.628923] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:13:48.371 [2024-07-25 12:05:55.628974] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1257181 ] 00:13:48.371 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:48.371 Zero copy mechanism will not be used. 
00:13:48.629 [2024-07-25 12:05:55.730326] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.629 [2024-07-25 12:05:55.830131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:48.629 [2024-07-25 12:05:55.888131] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:48.629 [2024-07-25 12:05:55.888161] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:49.196 12:05:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:49.196 12:05:56 -- common/autotest_common.sh@852 -- # return 0 00:13:49.196 12:05:56 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:13:49.196 12:05:56 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:13:49.196 12:05:56 -- bdev/bdev_raid.sh@550 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:49.455 BaseBdev1_malloc 00:13:49.455 12:05:56 -- bdev/bdev_raid.sh@551 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:49.713 [2024-07-25 12:05:56.778519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:49.713 [2024-07-25 12:05:56.778557] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.713 [2024-07-25 12:05:56.778575] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1bb2a00 00:13:49.713 [2024-07-25 12:05:56.778584] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.713 [2024-07-25 12:05:56.779757] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.713 [2024-07-25 12:05:56.779778] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:49.713 BaseBdev1 00:13:49.713 12:05:56 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 
00:13:49.713 12:05:56 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:13:49.713 12:05:56 -- bdev/bdev_raid.sh@550 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:49.713 BaseBdev2_malloc 00:13:49.713 12:05:56 -- bdev/bdev_raid.sh@551 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:49.972 [2024-07-25 12:05:57.119284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:49.972 [2024-07-25 12:05:57.119320] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.972 [2024-07-25 12:05:57.119337] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1bb35f0 00:13:49.972 [2024-07-25 12:05:57.119349] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.972 [2024-07-25 12:05:57.120466] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.972 [2024-07-25 12:05:57.120489] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:49.972 BaseBdev2 00:13:49.972 12:05:57 -- bdev/bdev_raid.sh@558 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:13:50.230 spare_malloc 00:13:50.230 12:05:57 -- bdev/bdev_raid.sh@559 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:50.230 spare_delay 00:13:50.230 12:05:57 -- bdev/bdev_raid.sh@560 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:13:50.489 [2024-07-25 12:05:57.613573] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:13:50.489 [2024-07-25 12:05:57.613606] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.489 [2024-07-25 12:05:57.613621] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1bb3f50 00:13:50.489 [2024-07-25 12:05:57.613630] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.489 [2024-07-25 12:05:57.614761] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.489 [2024-07-25 12:05:57.614783] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:50.489 spare 00:13:50.489 12:05:57 -- bdev/bdev_raid.sh@563 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:13:50.489 [2024-07-25 12:05:57.778017] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:50.489 [2024-07-25 12:05:57.778794] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:50.489 [2024-07-25 12:05:57.778913] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x1bb55f0 00:13:50.489 [2024-07-25 12:05:57.778922] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:50.489 [2024-07-25 12:05:57.779040] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ba9ab0 00:13:50.489 [2024-07-25 12:05:57.779130] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1bb55f0 00:13:50.489 [2024-07-25 12:05:57.779137] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1bb55f0 00:13:50.489 [2024-07-25 12:05:57.779195] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.489 12:05:57 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:50.489 12:05:57 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:13:50.489 12:05:57 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:13:50.489 12:05:57 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:13:50.489 12:05:57 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:13:50.489 12:05:57 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:13:50.489 12:05:57 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:13:50.489 12:05:57 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:13:50.489 12:05:57 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:13:50.489 12:05:57 -- bdev/bdev_raid.sh@125 -- # local tmp 00:13:50.747 12:05:57 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:50.747 12:05:57 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.747 12:05:57 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:13:50.747 "name": "raid_bdev1", 00:13:50.747 "uuid": "0892f800-afba-4187-876b-6d890851271c", 00:13:50.747 "strip_size_kb": 0, 00:13:50.747 "state": "online", 00:13:50.747 "raid_level": "raid1", 00:13:50.747 "superblock": true, 00:13:50.747 "num_base_bdevs": 2, 00:13:50.747 "num_base_bdevs_discovered": 2, 00:13:50.747 "num_base_bdevs_operational": 2, 00:13:50.747 "base_bdevs_list": [ 00:13:50.747 { 00:13:50.747 "name": "BaseBdev1", 00:13:50.747 "uuid": "afafa052-4eda-5ff5-8e79-189207cad765", 00:13:50.747 "is_configured": true, 00:13:50.747 "data_offset": 2048, 00:13:50.747 "data_size": 63488 00:13:50.747 }, 00:13:50.747 { 00:13:50.747 "name": "BaseBdev2", 00:13:50.747 "uuid": "c07dd71d-60c9-5280-8410-366210fd570c", 00:13:50.747 "is_configured": true, 00:13:50.747 "data_offset": 2048, 00:13:50.747 "data_size": 63488 00:13:50.747 } 00:13:50.747 ] 00:13:50.748 }' 00:13:50.748 12:05:57 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:13:50.748 12:05:57 -- common/autotest_common.sh@10 -- # 
set +x
00:13:51.314 12:05:58 -- bdev/bdev_raid.sh@567 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:13:51.314 12:05:58 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks'
00:13:51.314 [2024-07-25 12:05:58.616287] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:13:51.573 12:05:58 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488
00:13:51.573 12:05:58 -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:51.573 12:05:58 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:13:51.573 12:05:58 -- bdev/bdev_raid.sh@570 -- # data_offset=2048
00:13:51.573 12:05:58 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']'
00:13:51.573 12:05:58 -- bdev/bdev_raid.sh@576 -- # local write_unit_size
00:13:51.573 12:05:58 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0
00:13:51.573 12:05:58 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:51.573 12:05:58 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:13:51.573 12:05:58 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:13:51.573 12:05:58 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:13:51.573 12:05:58 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:13:51.573 12:05:58 -- bdev/nbd_common.sh@12 -- # local i
00:13:51.573 12:05:58 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:13:51.573 12:05:58 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:51.573 12:05:58 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:13:51.831 [2024-07-25 12:05:58.953051] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ba9ab0
00:13:51.831 /dev/nbd0
00:13:51.831 12:05:58 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:13:51.831 12:05:58 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:13:51.831 12:05:58 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0
00:13:51.831 12:05:58 -- common/autotest_common.sh@857 -- # local i
00:13:51.831 12:05:58 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:13:51.831 12:05:58 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:13:51.831 12:05:58 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions
00:13:51.831 12:05:58 -- common/autotest_common.sh@861 -- # break
00:13:51.832 12:05:58 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:13:51.832 12:05:58 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:13:51.832 12:05:58 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:13:51.832 1+0 records in
00:13:51.832 1+0 records out
00:13:51.832 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260696 s, 15.7 MB/s
00:13:51.832 12:05:58 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:13:51.832 12:05:59 -- common/autotest_common.sh@874 -- # size=4096
00:13:51.832 12:05:59 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:13:51.832 12:05:59 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:13:51.832 12:05:59 -- common/autotest_common.sh@877 -- # return 0
00:13:51.832 12:05:59 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:13:51.832 12:05:59 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:13:51.832 12:05:59 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']'
00:13:51.832 12:05:59 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1
00:13:51.832 12:05:59 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct
00:13:55.111 63488+0 records in
00:13:55.112 63488+0 records out
00:13:55.112 32505856 bytes (33 MB, 31 MiB) copied, 3.26987 s, 9.9 MB/s
00:13:55.112 12:06:02 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:13:55.112 12:06:02 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:13:55.112 12:06:02 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:13:55.112 12:06:02 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:13:55.112 12:06:02 -- bdev/nbd_common.sh@51 -- # local i
00:13:55.112 12:06:02 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:13:55.112 12:06:02 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:13:55.369 12:06:02 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:13:55.369 12:06:02 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:13:55.369 12:06:02 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:13:55.369 12:06:02 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:13:55.369 12:06:02 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:13:55.369 12:06:02 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:13:55.369 12:06:02 -- bdev/nbd_common.sh@41 -- # break
00:13:55.369 12:06:02 -- bdev/nbd_common.sh@45 -- # return 0
00:13:55.369 [2024-07-25 12:06:02.458440] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:55.369 12:06:02 -- bdev/bdev_raid.sh@591 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:13:55.369 [2024-07-25 12:06:02.614881] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:13:55.369 12:06:02 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:13:55.369 12:06:02 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:13:55.369 12:06:02 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:13:55.369 12:06:02 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:13:55.369 12:06:02 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:13:55.369 12:06:02 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:13:55.369 12:06:02 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:55.369 12:06:02 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:55.369 12:06:02 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:55.369 12:06:02 -- bdev/bdev_raid.sh@125 -- # local tmp
00:13:55.369 12:06:02 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:55.369 12:06:02 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:55.626 12:06:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:55.626 "name": "raid_bdev1",
00:13:55.626 "uuid": "0892f800-afba-4187-876b-6d890851271c",
00:13:55.626 "strip_size_kb": 0,
00:13:55.626 "state": "online",
00:13:55.626 "raid_level": "raid1",
00:13:55.626 "superblock": true,
00:13:55.626 "num_base_bdevs": 2,
00:13:55.626 "num_base_bdevs_discovered": 1,
00:13:55.626 "num_base_bdevs_operational": 1,
00:13:55.626 "base_bdevs_list": [
00:13:55.626 {
00:13:55.626 "name": null,
00:13:55.627 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:55.627 "is_configured": false,
00:13:55.627 "data_offset": 2048,
00:13:55.627 "data_size": 63488
00:13:55.627 },
00:13:55.627 {
00:13:55.627 "name": "BaseBdev2",
00:13:55.627 "uuid": "c07dd71d-60c9-5280-8410-366210fd570c",
00:13:55.627 "is_configured": true,
00:13:55.627 "data_offset": 2048,
00:13:55.627 "data_size": 63488
00:13:55.627 }
00:13:55.627 ]
00:13:55.627 }'
00:13:55.627 12:06:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:55.627 12:06:02 -- common/autotest_common.sh@10 -- # set +x
00:13:56.193 12:06:03 -- bdev/bdev_raid.sh@597 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:13:56.193 [2024-07-25 12:06:03.428973] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:13:56.193 [2024-07-25 12:06:03.428998] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:56.193 [2024-07-25 12:06:03.433521] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1bb76f0
00:13:56.193 [2024-07-25 12:06:03.435261] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:56.193 12:06:03 -- bdev/bdev_raid.sh@598 -- # sleep 1
00:13:57.580 12:06:04 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:57.580 12:06:04 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:13:57.580 12:06:04 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:13:57.580 12:06:04 -- bdev/bdev_raid.sh@185 -- # local target=spare
00:13:57.580 12:06:04 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:13:57.580 12:06:04 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:57.580 12:06:04 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:57.580 12:06:04 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:13:57.580 "name": "raid_bdev1",
00:13:57.580 "uuid": "0892f800-afba-4187-876b-6d890851271c",
00:13:57.580 "strip_size_kb": 0,
00:13:57.580 "state": "online",
00:13:57.580 "raid_level": "raid1",
00:13:57.580 "superblock": true,
00:13:57.580 "num_base_bdevs": 2,
00:13:57.580 "num_base_bdevs_discovered": 2,
00:13:57.580 "num_base_bdevs_operational": 2,
00:13:57.580 "process": {
00:13:57.580 "type": "rebuild",
00:13:57.580 "target": "spare",
00:13:57.580 "progress": {
00:13:57.580 "blocks": 22528,
00:13:57.580 "percent": 35
00:13:57.580 }
00:13:57.580 },
00:13:57.580 "base_bdevs_list": [
00:13:57.580 {
00:13:57.580 "name": "spare",
00:13:57.580 "uuid": "b91560e7-0e53-5c6c-82bb-5de237460921",
00:13:57.580 "is_configured": true,
00:13:57.580 "data_offset": 2048,
00:13:57.580 "data_size": 63488
00:13:57.580 },
00:13:57.580 {
00:13:57.580 "name": "BaseBdev2",
00:13:57.580 "uuid": "c07dd71d-60c9-5280-8410-366210fd570c",
00:13:57.580 "is_configured": true,
00:13:57.580 "data_offset": 2048,
00:13:57.580 "data_size": 63488
00:13:57.580 }
00:13:57.580 ]
00:13:57.580 }'
00:13:57.580 12:06:04 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:13:57.580 12:06:04 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:13:57.580 12:06:04 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:13:57.580 12:06:04 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:13:57.580 12:06:04 -- bdev/bdev_raid.sh@604 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:13:57.580 [2024-07-25 12:06:04.870031] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:13:57.838 [2024-07-25 12:06:04.946221] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:13:57.838 [2024-07-25 12:06:04.946255] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:13:57.838 12:06:04 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1
00:13:57.838 12:06:04 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:13:57.838 12:06:04 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:13:57.838 12:06:04 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:13:57.838 12:06:04 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:13:57.838 12:06:04 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1
00:13:57.838 12:06:04 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:13:57.838 12:06:04 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:13:57.838 12:06:04 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:13:57.838 12:06:04 -- bdev/bdev_raid.sh@125 -- # local tmp
00:13:57.838 12:06:04 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:57.838 12:06:04 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:57.838 12:06:05 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:13:57.838 "name": "raid_bdev1",
00:13:57.838 "uuid": "0892f800-afba-4187-876b-6d890851271c",
00:13:57.838 "strip_size_kb": 0,
00:13:57.838 "state": "online",
00:13:57.839 "raid_level": "raid1",
00:13:57.839 "superblock": true,
00:13:57.839 "num_base_bdevs": 2,
00:13:57.839 "num_base_bdevs_discovered": 1,
00:13:57.839 "num_base_bdevs_operational": 1,
00:13:57.839 "base_bdevs_list": [
00:13:57.839 {
00:13:57.839 "name": null,
00:13:57.839 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:57.839 "is_configured": false,
00:13:57.839 "data_offset": 2048,
00:13:57.839 "data_size": 63488
00:13:57.839 },
00:13:57.839 {
00:13:57.839 "name": "BaseBdev2",
00:13:57.839 "uuid": "c07dd71d-60c9-5280-8410-366210fd570c",
00:13:57.839 "is_configured": true,
00:13:57.839 "data_offset": 2048,
00:13:57.839 "data_size": 63488
00:13:57.839 }
00:13:57.839 ]
00:13:57.839 }'
00:13:57.839 12:06:05 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:13:57.839 12:06:05 -- common/autotest_common.sh@10 -- # set +x
00:13:58.404 12:06:05 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none
00:13:58.404 12:06:05 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:13:58.404 12:06:05 -- bdev/bdev_raid.sh@184 -- # local process_type=none
00:13:58.404 12:06:05 -- bdev/bdev_raid.sh@185 -- # local target=none
00:13:58.404 12:06:05 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:13:58.404 12:06:05 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:58.404 12:06:05 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:13:58.662 12:06:05 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:13:58.662 "name": "raid_bdev1",
00:13:58.663 "uuid": "0892f800-afba-4187-876b-6d890851271c",
00:13:58.663 "strip_size_kb": 0,
00:13:58.663 "state": "online",
00:13:58.663 "raid_level": "raid1",
00:13:58.663 "superblock": true,
00:13:58.663 "num_base_bdevs": 2,
00:13:58.663 "num_base_bdevs_discovered": 1,
00:13:58.663 "num_base_bdevs_operational": 1,
00:13:58.663 "base_bdevs_list": [
00:13:58.663 {
00:13:58.663 "name": null,
00:13:58.663 "uuid": "00000000-0000-0000-0000-000000000000",
00:13:58.663 "is_configured": false,
00:13:58.663 "data_offset": 2048,
00:13:58.663 "data_size": 63488
00:13:58.663 },
00:13:58.663 {
00:13:58.663 "name": "BaseBdev2",
00:13:58.663 "uuid": "c07dd71d-60c9-5280-8410-366210fd570c",
00:13:58.663 "is_configured": true,
00:13:58.663 "data_offset": 2048,
00:13:58.663 "data_size": 63488
00:13:58.663 }
00:13:58.663 ]
00:13:58.663 }'
00:13:58.663 12:06:05 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:13:58.663 12:06:05 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:13:58.663 12:06:05 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:13:58.663 12:06:05 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:13:58.663 12:06:05 -- bdev/bdev_raid.sh@613 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:13:58.921 [2024-07-25 12:06:06.029919] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:13:58.921 [2024-07-25 12:06:06.029946] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:13:58.921 [2024-07-25 12:06:06.034377] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1c76060
00:13:58.921 [2024-07-25 12:06:06.035469] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:13:58.921 12:06:06 -- bdev/bdev_raid.sh@614 -- # sleep 1
00:13:59.854 12:06:07 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:13:59.854 12:06:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:13:59.854 12:06:07 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:13:59.854 12:06:07 -- bdev/bdev_raid.sh@185 -- # local target=spare
00:13:59.854 12:06:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:13:59.854 12:06:07 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:13:59.854 12:06:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:00.113 12:06:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:14:00.113 "name": "raid_bdev1",
00:14:00.113 "uuid": "0892f800-afba-4187-876b-6d890851271c",
00:14:00.113 "strip_size_kb": 0,
00:14:00.113 "state": "online",
00:14:00.113 "raid_level": "raid1",
00:14:00.113 "superblock": true,
00:14:00.113 "num_base_bdevs": 2,
00:14:00.113 "num_base_bdevs_discovered": 2,
00:14:00.113 "num_base_bdevs_operational": 2,
00:14:00.113 "process": {
00:14:00.113 "type": "rebuild",
00:14:00.113 "target": "spare",
00:14:00.113 "progress": {
00:14:00.113 "blocks": 22528,
00:14:00.113 "percent": 35
00:14:00.113 }
00:14:00.113 },
00:14:00.113 "base_bdevs_list": [
00:14:00.113 {
00:14:00.113 "name": "spare",
00:14:00.113 "uuid": "b91560e7-0e53-5c6c-82bb-5de237460921",
00:14:00.113 "is_configured": true,
00:14:00.113 "data_offset": 2048,
00:14:00.113 "data_size": 63488
00:14:00.113 },
00:14:00.113 {
00:14:00.113 "name": "BaseBdev2",
00:14:00.113 "uuid": "c07dd71d-60c9-5280-8410-366210fd570c",
00:14:00.113 "is_configured": true,
00:14:00.113 "data_offset": 2048,
00:14:00.113 "data_size": 63488
00:14:00.113 }
00:14:00.113 ]
00:14:00.113 }'
00:14:00.113 12:06:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:14:00.113 12:06:07 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:00.113 12:06:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:14:00.113 12:06:07 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:14:00.113 12:06:07 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']'
00:14:00.113 12:06:07 -- bdev/bdev_raid.sh@617 -- # '[' = false ']'
00:14:00.113 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected
00:14:00.113 12:06:07 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2
00:14:00.113 12:06:07 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']'
00:14:00.113 12:06:07 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']'
00:14:00.113 12:06:07 -- bdev/bdev_raid.sh@657 -- # local timeout=306
00:14:00.113 12:06:07 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:14:00.113 12:06:07 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:00.113 12:06:07 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:14:00.113 12:06:07 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:14:00.113 12:06:07 -- bdev/bdev_raid.sh@185 -- # local target=spare
00:14:00.113 12:06:07 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:14:00.113 12:06:07 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:00.113 12:06:07 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:00.372 12:06:07 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:14:00.372 "name": "raid_bdev1",
00:14:00.372 "uuid": "0892f800-afba-4187-876b-6d890851271c",
00:14:00.372 "strip_size_kb": 0,
00:14:00.372 "state": "online",
00:14:00.372 "raid_level": "raid1",
00:14:00.372 "superblock": true,
00:14:00.372 "num_base_bdevs": 2,
00:14:00.372 "num_base_bdevs_discovered": 2,
00:14:00.372 "num_base_bdevs_operational": 2,
00:14:00.372 "process": {
00:14:00.372 "type": "rebuild",
00:14:00.372 "target": "spare",
00:14:00.372 "progress": {
00:14:00.372 "blocks": 28672,
00:14:00.372 "percent": 45
00:14:00.372 }
00:14:00.372 },
00:14:00.372 "base_bdevs_list": [
00:14:00.372 {
00:14:00.372 "name": "spare",
00:14:00.372 "uuid": "b91560e7-0e53-5c6c-82bb-5de237460921",
00:14:00.372 "is_configured": true,
00:14:00.372 "data_offset": 2048,
00:14:00.372 "data_size": 63488
00:14:00.372 },
00:14:00.372 {
00:14:00.372 "name": "BaseBdev2",
00:14:00.372 "uuid": "c07dd71d-60c9-5280-8410-366210fd570c",
00:14:00.372 "is_configured": true,
00:14:00.372 "data_offset": 2048,
00:14:00.372 "data_size": 63488
00:14:00.372 }
00:14:00.372 ]
00:14:00.372 }'
00:14:00.372 12:06:07 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:14:00.372 12:06:07 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:00.372 12:06:07 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:14:00.372 12:06:07 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:14:00.372 12:06:07 -- bdev/bdev_raid.sh@662 -- # sleep 1
00:14:01.307 12:06:08 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:14:01.307 12:06:08 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:01.307 12:06:08 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:14:01.307 12:06:08 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:14:01.307 12:06:08 -- bdev/bdev_raid.sh@185 -- # local target=spare
00:14:01.307 12:06:08 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:14:01.307 12:06:08 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:01.307 12:06:08 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:01.565 12:06:08 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:14:01.565 "name": "raid_bdev1",
00:14:01.565 "uuid": "0892f800-afba-4187-876b-6d890851271c",
00:14:01.565 "strip_size_kb": 0,
00:14:01.565 "state": "online",
00:14:01.565 "raid_level": "raid1",
00:14:01.565 "superblock": true,
00:14:01.565 "num_base_bdevs": 2,
00:14:01.565 "num_base_bdevs_discovered": 2,
00:14:01.565 "num_base_bdevs_operational": 2,
00:14:01.565 "process": {
00:14:01.565 "type": "rebuild",
00:14:01.565 "target": "spare",
00:14:01.565 "progress": {
00:14:01.565 "blocks": 53248,
00:14:01.565 "percent": 83
00:14:01.565 }
00:14:01.565 },
00:14:01.565 "base_bdevs_list": [
00:14:01.565 {
00:14:01.565 "name": "spare",
00:14:01.565 "uuid": "b91560e7-0e53-5c6c-82bb-5de237460921",
00:14:01.565 "is_configured": true,
00:14:01.565 "data_offset": 2048,
00:14:01.565 "data_size": 63488
00:14:01.565 },
00:14:01.565 {
00:14:01.565 "name": "BaseBdev2",
00:14:01.565 "uuid": "c07dd71d-60c9-5280-8410-366210fd570c",
00:14:01.565 "is_configured": true,
00:14:01.565 "data_offset": 2048,
00:14:01.565 "data_size": 63488
00:14:01.565 }
00:14:01.565 ]
00:14:01.565 }'
00:14:01.565 12:06:08 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:14:01.565 12:06:08 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:14:01.565 12:06:08 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:14:01.565 12:06:08 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:14:01.565 12:06:08 -- bdev/bdev_raid.sh@662 -- # sleep 1
00:14:02.161 [2024-07-25 12:06:09.158201] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:14:02.161 [2024-07-25 12:06:09.158244] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:14:02.161 [2024-07-25 12:06:09.158308] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:02.740 12:06:09 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:14:02.740 12:06:09 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:14:02.740 12:06:09 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:14:02.740 12:06:09 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:14:02.740 12:06:09 -- bdev/bdev_raid.sh@185 -- # local target=spare
00:14:02.740 12:06:09 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:14:02.740 12:06:09 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:02.740 12:06:09 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:02.740 12:06:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:14:02.740 "name": "raid_bdev1",
00:14:02.740 "uuid": "0892f800-afba-4187-876b-6d890851271c",
00:14:02.740 "strip_size_kb": 0,
00:14:02.740 "state": "online",
00:14:02.740 "raid_level": "raid1",
00:14:02.740 "superblock": true,
00:14:02.740 "num_base_bdevs": 2,
00:14:02.740 "num_base_bdevs_discovered": 2,
00:14:02.740 "num_base_bdevs_operational": 2,
00:14:02.740 "base_bdevs_list": [
00:14:02.740 {
00:14:02.740 "name": "spare",
00:14:02.740 "uuid": "b91560e7-0e53-5c6c-82bb-5de237460921",
00:14:02.740 "is_configured": true,
00:14:02.740 "data_offset": 2048,
00:14:02.740 "data_size": 63488
00:14:02.740 },
00:14:02.740 {
00:14:02.740 "name": "BaseBdev2",
00:14:02.740 "uuid": "c07dd71d-60c9-5280-8410-366210fd570c",
00:14:02.740 "is_configured": true,
00:14:02.740 "data_offset": 2048,
00:14:02.740 "data_size": 63488
00:14:02.740 }
00:14:02.740 ]
00:14:02.740 }'
00:14:02.997 12:06:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:14:02.997 12:06:10 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]]
00:14:02.997 12:06:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:14:02.997 12:06:10 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]]
00:14:02.997 12:06:10 -- bdev/bdev_raid.sh@660 -- # break
00:14:02.997 12:06:10 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none
00:14:02.998 12:06:10 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:14:02.998 12:06:10 -- bdev/bdev_raid.sh@184 -- # local process_type=none
00:14:02.998 12:06:10 -- bdev/bdev_raid.sh@185 -- # local target=none
00:14:02.998 12:06:10 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:14:02.998 12:06:10 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:02.998 12:06:10 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:02.998 12:06:10 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:14:02.998 "name": "raid_bdev1",
00:14:02.998 "uuid": "0892f800-afba-4187-876b-6d890851271c",
00:14:02.998 "strip_size_kb": 0,
00:14:02.998 "state": "online",
00:14:02.998 "raid_level": "raid1",
00:14:02.998 "superblock": true,
00:14:02.998 "num_base_bdevs": 2,
00:14:02.998 "num_base_bdevs_discovered": 2,
00:14:02.998 "num_base_bdevs_operational": 2,
00:14:02.998 "base_bdevs_list": [
00:14:02.998 {
00:14:02.998 "name": "spare",
00:14:02.998 "uuid": "b91560e7-0e53-5c6c-82bb-5de237460921",
00:14:02.998 "is_configured": true,
00:14:02.998 "data_offset": 2048,
00:14:02.998 "data_size": 63488
00:14:02.998 },
00:14:02.998 {
00:14:02.998 "name": "BaseBdev2",
00:14:02.998 "uuid": "c07dd71d-60c9-5280-8410-366210fd570c",
00:14:02.998 "is_configured": true,
00:14:02.998 "data_offset": 2048,
00:14:02.998 "data_size": 63488
00:14:02.998 }
00:14:02.998 ]
00:14:02.998 }'
00:14:02.998 12:06:10 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:14:02.998 12:06:10 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:14:02.998 12:06:10 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:14:03.255 12:06:10 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:14:03.255 12:06:10 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2
00:14:03.255 12:06:10 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:14:03.255 12:06:10 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:14:03.255 12:06:10 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:14:03.255 12:06:10 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:14:03.255 12:06:10 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2
00:14:03.255 12:06:10 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:03.255 12:06:10 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:03.255 12:06:10 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:03.255 12:06:10 -- bdev/bdev_raid.sh@125 -- # local tmp
00:14:03.255 12:06:10 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:03.255 12:06:10 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:14:03.255 12:06:10 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:14:03.255 "name": "raid_bdev1",
00:14:03.255 "uuid": "0892f800-afba-4187-876b-6d890851271c",
00:14:03.255 "strip_size_kb": 0,
00:14:03.255 "state": "online",
00:14:03.255 "raid_level": "raid1",
00:14:03.255 "superblock": true,
00:14:03.255 "num_base_bdevs": 2,
00:14:03.255 "num_base_bdevs_discovered": 2,
00:14:03.255 "num_base_bdevs_operational": 2,
00:14:03.255 "base_bdevs_list": [
00:14:03.255 {
00:14:03.255 "name": "spare",
00:14:03.255 "uuid": "b91560e7-0e53-5c6c-82bb-5de237460921",
00:14:03.255 "is_configured": true,
00:14:03.255 "data_offset": 2048,
00:14:03.255 "data_size": 63488
00:14:03.255 },
00:14:03.255 {
00:14:03.255 "name": "BaseBdev2",
00:14:03.255 "uuid": "c07dd71d-60c9-5280-8410-366210fd570c",
00:14:03.255 "is_configured": true,
00:14:03.255 "data_offset": 2048,
00:14:03.255 "data_size": 63488
00:14:03.255 }
00:14:03.255 ]
00:14:03.255 }'
00:14:03.255 12:06:10 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:14:03.255 12:06:10 -- common/autotest_common.sh@10 -- # set +x
00:14:03.820 12:06:11 -- bdev/bdev_raid.sh@670 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1
00:14:04.078 [2024-07-25 12:06:11.159736] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:04.078 [2024-07-25 12:06:11.159758] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:14:04.078 [2024-07-25 12:06:11.159798] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:14:04.078 [2024-07-25 12:06:11.159835] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:14:04.078 [2024-07-25 12:06:11.159842] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1bb55f0 name raid_bdev1, state offline
00:14:04.078 12:06:11 -- bdev/bdev_raid.sh@671 -- # jq length
00:14:04.078 12:06:11 -- bdev/bdev_raid.sh@671 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:04.078 12:06:11 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]]
00:14:04.078 12:06:11 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']'
00:14:04.078 12:06:11 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1'
00:14:04.078 12:06:11 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:14:04.078 12:06:11 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare')
00:14:04.078 12:06:11 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:14:04.078 12:06:11 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:14:04.078 12:06:11 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:14:04.078 12:06:11 -- bdev/nbd_common.sh@12 -- # local i
00:14:04.078 12:06:11 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:14:04.078 12:06:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:14:04.078 12:06:11 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0
00:14:04.336 /dev/nbd0
00:14:04.336 12:06:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:14:04.336 12:06:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:14:04.336 12:06:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0
00:14:04.336 12:06:11 -- common/autotest_common.sh@857 -- # local i
00:14:04.336 12:06:11 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:14:04.336 12:06:11 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:14:04.336 12:06:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions
00:14:04.336 12:06:11 -- common/autotest_common.sh@861 -- # break
00:14:04.336 12:06:11 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:14:04.336 12:06:11 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:14:04.336 12:06:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:14:04.336 1+0 records in
00:14:04.336 1+0 records out
00:14:04.336 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000146474 s, 28.0 MB/s
00:14:04.336 12:06:11 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:14:04.336 12:06:11 -- common/autotest_common.sh@874 -- # size=4096
00:14:04.336 12:06:11 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:14:04.336 12:06:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:14:04.336 12:06:11 -- common/autotest_common.sh@877 -- # return 0
00:14:04.336 12:06:11 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:14:04.336 12:06:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:14:04.336 12:06:11 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1
00:14:04.594 /dev/nbd1
00:14:04.594 12:06:11 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:14:04.594 12:06:11 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:14:04.594 12:06:11 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1
00:14:04.594 12:06:11 -- common/autotest_common.sh@857 -- # local i
00:14:04.594 12:06:11 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:14:04.594 12:06:11 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:14:04.594 12:06:11 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions
00:14:04.594 12:06:11 -- common/autotest_common.sh@861 -- # break
00:14:04.594 12:06:11 -- common/autotest_common.sh@872 -- # (( i = 1 ))
00:14:04.594 12:06:11 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:14:04.594 12:06:11 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:14:04.594 1+0 records in
00:14:04.594 1+0 records out
00:14:04.594 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000276545 s, 14.8 MB/s
00:14:04.594 12:06:11 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:14:04.594 12:06:11 -- common/autotest_common.sh@874 -- # size=4096
00:14:04.594 12:06:11 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:14:04.594 12:06:11 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:14:04.594 12:06:11 -- common/autotest_common.sh@877 -- # return 0
00:14:04.594 12:06:11 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:14:04.594 12:06:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:14:04.594 12:06:11 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1
00:14:04.594 12:06:11 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1'
00:14:04.594 12:06:11 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:14:04.594 12:06:11 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:14:04.594 12:06:11 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:14:04.594 12:06:11 -- bdev/nbd_common.sh@51 -- # local i
00:14:04.594 12:06:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:14:04.594 12:06:11 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:14:04.852 12:06:11 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:14:04.852 12:06:11 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:14:04.852 12:06:11 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:14:04.852 12:06:11 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:14:04.852 12:06:11 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:14:04.852 12:06:11 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:14:04.852 12:06:11 -- bdev/nbd_common.sh@41 -- # break
00:14:04.852 12:06:11 -- bdev/nbd_common.sh@45 -- # return 0
00:14:04.852 12:06:11 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:14:04.852 12:06:11 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1
00:14:04.852 12:06:12 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:14:05.109 12:06:12 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:14:05.109 12:06:12 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:14:05.109 12:06:12 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:14:05.109 12:06:12 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:14:05.109 12:06:12 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:14:05.109 12:06:12 -- bdev/nbd_common.sh@41 -- # break
00:14:05.109 12:06:12 -- bdev/nbd_common.sh@45 -- # return 0
00:14:05.109 12:06:12 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']'
00:14:05.109 12:06:12 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:14:05.109 12:06:12 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']'
00:14:05.109 12:06:12 -- bdev/bdev_raid.sh@698 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1
00:14:05.109 12:06:12 -- bdev/bdev_raid.sh@699 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
00:14:05.368 [2024-07-25 12:06:12.487283] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc
00:14:05.368 [2024-07-25 12:06:12.487318] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:05.368 [2024-07-25 12:06:12.487349] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1bb2c30
00:14:05.368 [2024-07-25 12:06:12.487357] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:05.368 [2024-07-25 12:06:12.488513] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:05.368 [2024-07-25 12:06:12.488536] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1
00:14:05.368 [2024-07-25 12:06:12.488584] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1
00:14:05.368 [2024-07-25 12:06:12.488601] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:14:05.368 BaseBdev1
00:14:05.368 12:06:12 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}"
00:14:05.368 12:06:12 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']'
00:14:05.368 12:06:12 -- bdev/bdev_raid.sh@698 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2
00:14:05.368 12:06:12 -- bdev/bdev_raid.sh@699 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
00:14:05.626 [2024-07-25 12:06:12.820136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc
00:14:05.626 [2024-07-25 12:06:12.820166] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:05.626 [2024-07-25 12:06:12.820181] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1babfc0
00:14:05.626 [2024-07-25 12:06:12.820189] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed
00:14:05.626 [2024-07-25 12:06:12.820421] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:14:05.626 [2024-07-25 12:06:12.820448] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2
00:14:05.626 [2024-07-25 12:06:12.820487] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2
00:14:05.626 [2024-07-25 12:06:12.820495] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1)
00:14:05.626 [2024-07-25 12:06:12.820502] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:14:05.626 [2024-07-25 12:06:12.820511] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1da58e0 name raid_bdev1, state configuring
00:14:05.626 [2024-07-25 12:06:12.820532] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:14:05.626 BaseBdev2
00:14:05.885 12:06:12 -- bdev/bdev_raid.sh@701 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare
00:14:05.885 12:06:13 -- bdev/bdev_raid.sh@702 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare
00:14:05.885 [2024-07-25 12:06:13.152993] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay
00:14:05.885 [2024-07-25 12:06:13.153020] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:14:05.885 [2024-07-25 12:06:13.153035] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ba94b0
00:14:05.885 [2024-07-25 12:06:13.153043] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev
claimed 00:14:05.885 [2024-07-25 12:06:13.153291] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.885 [2024-07-25 12:06:13.153304] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:05.885 [2024-07-25 12:06:13.153351] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:14:05.885 [2024-07-25 12:06:13.153362] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:05.885 spare 00:14:05.885 12:06:13 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:05.885 12:06:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:05.885 12:06:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:05.885 12:06:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:05.885 12:06:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:05.885 12:06:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:05.885 12:06:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:05.885 12:06:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:05.885 12:06:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:05.885 12:06:13 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:05.885 12:06:13 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:05.885 12:06:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.144 [2024-07-25 12:06:13.253660] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x1da5b60 00:14:06.144 [2024-07-25 12:06:13.253674] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:06.144 [2024-07-25 12:06:13.253815] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1da5fe0 00:14:06.144 [2024-07-25 12:06:13.253928] 
bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1da5b60 00:14:06.144 [2024-07-25 12:06:13.253935] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1da5b60 00:14:06.144 [2024-07-25 12:06:13.254010] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.144 12:06:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:06.144 "name": "raid_bdev1", 00:14:06.144 "uuid": "0892f800-afba-4187-876b-6d890851271c", 00:14:06.144 "strip_size_kb": 0, 00:14:06.144 "state": "online", 00:14:06.144 "raid_level": "raid1", 00:14:06.144 "superblock": true, 00:14:06.144 "num_base_bdevs": 2, 00:14:06.144 "num_base_bdevs_discovered": 2, 00:14:06.144 "num_base_bdevs_operational": 2, 00:14:06.144 "base_bdevs_list": [ 00:14:06.144 { 00:14:06.144 "name": "spare", 00:14:06.144 "uuid": "b91560e7-0e53-5c6c-82bb-5de237460921", 00:14:06.144 "is_configured": true, 00:14:06.144 "data_offset": 2048, 00:14:06.144 "data_size": 63488 00:14:06.145 }, 00:14:06.145 { 00:14:06.145 "name": "BaseBdev2", 00:14:06.145 "uuid": "c07dd71d-60c9-5280-8410-366210fd570c", 00:14:06.145 "is_configured": true, 00:14:06.145 "data_offset": 2048, 00:14:06.145 "data_size": 63488 00:14:06.145 } 00:14:06.145 ] 00:14:06.145 }' 00:14:06.145 12:06:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:06.145 12:06:13 -- common/autotest_common.sh@10 -- # set +x 00:14:06.712 12:06:13 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:06.712 12:06:13 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:14:06.712 12:06:13 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:14:06.712 12:06:13 -- bdev/bdev_raid.sh@185 -- # local target=none 00:14:06.712 12:06:13 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:14:06.712 12:06:13 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:14:06.712 12:06:13 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.712 12:06:14 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:14:06.712 "name": "raid_bdev1", 00:14:06.712 "uuid": "0892f800-afba-4187-876b-6d890851271c", 00:14:06.712 "strip_size_kb": 0, 00:14:06.712 "state": "online", 00:14:06.712 "raid_level": "raid1", 00:14:06.712 "superblock": true, 00:14:06.712 "num_base_bdevs": 2, 00:14:06.712 "num_base_bdevs_discovered": 2, 00:14:06.712 "num_base_bdevs_operational": 2, 00:14:06.712 "base_bdevs_list": [ 00:14:06.712 { 00:14:06.712 "name": "spare", 00:14:06.712 "uuid": "b91560e7-0e53-5c6c-82bb-5de237460921", 00:14:06.712 "is_configured": true, 00:14:06.712 "data_offset": 2048, 00:14:06.712 "data_size": 63488 00:14:06.712 }, 00:14:06.712 { 00:14:06.712 "name": "BaseBdev2", 00:14:06.712 "uuid": "c07dd71d-60c9-5280-8410-366210fd570c", 00:14:06.712 "is_configured": true, 00:14:06.712 "data_offset": 2048, 00:14:06.712 "data_size": 63488 00:14:06.712 } 00:14:06.712 ] 00:14:06.712 }' 00:14:06.712 12:06:14 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:14:06.969 12:06:14 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:14:06.969 12:06:14 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:14:06.969 12:06:14 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:14:06.969 12:06:14 -- bdev/bdev_raid.sh@706 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:06.969 12:06:14 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:06.969 12:06:14 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:14:06.969 12:06:14 -- bdev/bdev_raid.sh@709 -- # killprocess 1257181 00:14:06.969 12:06:14 -- common/autotest_common.sh@926 -- # '[' -z 1257181 ']' 00:14:06.969 12:06:14 -- common/autotest_common.sh@930 -- # kill -0 1257181 00:14:06.969 12:06:14 -- common/autotest_common.sh@931 -- # uname 00:14:07.226 
12:06:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:07.226 12:06:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1257181 00:14:07.226 12:06:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:07.226 12:06:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:07.226 12:06:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1257181' 00:14:07.226 killing process with pid 1257181 00:14:07.226 12:06:14 -- common/autotest_common.sh@945 -- # kill 1257181 00:14:07.226 Received shutdown signal, test time was about 60.000000 seconds 00:14:07.226 00:14:07.226 Latency(us) 00:14:07.226 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.226 =================================================================================================================== 00:14:07.226 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:07.226 [2024-07-25 12:06:14.317732] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:07.226 [2024-07-25 12:06:14.317786] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:07.226 [2024-07-25 12:06:14.317823] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:07.226 [2024-07-25 12:06:14.317830] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1da5b60 name raid_bdev1, state offline 00:14:07.226 12:06:14 -- common/autotest_common.sh@950 -- # wait 1257181 00:14:07.226 [2024-07-25 12:06:14.346739] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:07.484 12:06:14 -- bdev/bdev_raid.sh@711 -- # return 0 00:14:07.484 00:14:07.484 real 0m18.999s 00:14:07.484 user 0m26.658s 00:14:07.484 sys 0m3.982s 00:14:07.484 12:06:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:07.484 12:06:14 -- common/autotest_common.sh@10 -- # set +x 00:14:07.484 ************************************ 
00:14:07.484 END TEST raid_rebuild_test_sb 00:14:07.484 ************************************ 00:14:07.484 12:06:14 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true 00:14:07.484 12:06:14 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:14:07.484 12:06:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:07.484 12:06:14 -- common/autotest_common.sh@10 -- # set +x 00:14:07.484 ************************************ 00:14:07.484 START TEST raid_rebuild_test_io 00:14:07.484 ************************************ 00:14:07.484 12:06:14 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 false true 00:14:07.484 12:06:14 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:14:07.484 12:06:14 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:14:07.484 12:06:14 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:14:07.484 12:06:14 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:14:07.484 12:06:14 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:14:07.484 12:06:14 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:14:07.484 12:06:14 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:14:07.484 12:06:14 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:14:07.484 12:06:14 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:14:07.484 12:06:14 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:14:07.484 12:06:14 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:14:07.484 12:06:14 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:14:07.484 12:06:14 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:07.484 12:06:14 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:14:07.484 12:06:14 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:14:07.484 12:06:14 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:14:07.484 12:06:14 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:14:07.484 12:06:14 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:14:07.484 
12:06:14 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:14:07.484 12:06:14 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:14:07.484 12:06:14 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:14:07.484 12:06:14 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:14:07.484 12:06:14 -- bdev/bdev_raid.sh@544 -- # raid_pid=1260587 00:14:07.484 12:06:14 -- bdev/bdev_raid.sh@545 -- # waitforlisten 1260587 /var/tmp/spdk-raid.sock 00:14:07.484 12:06:14 -- bdev/bdev_raid.sh@543 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:07.484 12:06:14 -- common/autotest_common.sh@819 -- # '[' -z 1260587 ']' 00:14:07.484 12:06:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:07.484 12:06:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:07.484 12:06:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:07.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:07.484 12:06:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:07.484 12:06:14 -- common/autotest_common.sh@10 -- # set +x 00:14:07.484 [2024-07-25 12:06:14.691028] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:07.484 [2024-07-25 12:06:14.691087] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260587 ] 00:14:07.484 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:07.484 Zero copy mechanism will not be used. 
00:14:07.484 [2024-07-25 12:06:14.778027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.741 [2024-07-25 12:06:14.858541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.742 [2024-07-25 12:06:14.913579] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:07.742 [2024-07-25 12:06:14.913611] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:08.304 12:06:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:08.304 12:06:15 -- common/autotest_common.sh@852 -- # return 0 00:14:08.304 12:06:15 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:14:08.304 12:06:15 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:14:08.304 12:06:15 -- bdev/bdev_raid.sh@553 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:08.560 BaseBdev1 00:14:08.560 12:06:15 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:14:08.560 12:06:15 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:14:08.560 12:06:15 -- bdev/bdev_raid.sh@553 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:08.560 BaseBdev2 00:14:08.560 12:06:15 -- bdev/bdev_raid.sh@558 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:14:08.816 spare_malloc 00:14:08.816 12:06:16 -- bdev/bdev_raid.sh@559 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:09.072 spare_delay 00:14:09.072 12:06:16 -- bdev/bdev_raid.sh@560 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:14:09.072 [2024-07-25 12:06:16.348765] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:09.072 [2024-07-25 12:06:16.348802] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:09.072 [2024-07-25 12:06:16.348815] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x123f070 00:14:09.072 [2024-07-25 12:06:16.348824] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:09.072 [2024-07-25 12:06:16.349778] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:09.073 [2024-07-25 12:06:16.349799] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:09.073 spare 00:14:09.073 12:06:16 -- bdev/bdev_raid.sh@563 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:14:09.328 [2024-07-25 12:06:16.509199] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:09.328 [2024-07-25 12:06:16.509950] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:09.328 [2024-07-25 12:06:16.509990] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x12ff2d0 00:14:09.328 [2024-07-25 12:06:16.509998] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:09.328 [2024-07-25 12:06:16.510123] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1302c20 00:14:09.328 [2024-07-25 12:06:16.510198] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x12ff2d0 00:14:09.328 [2024-07-25 12:06:16.510204] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x12ff2d0 00:14:09.328 [2024-07-25 12:06:16.510279] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:09.328 12:06:16 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 2 00:14:09.328 12:06:16 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:09.328 12:06:16 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:09.328 12:06:16 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:09.328 12:06:16 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:09.328 12:06:16 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:09.328 12:06:16 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:09.328 12:06:16 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:09.328 12:06:16 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:09.328 12:06:16 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:09.328 12:06:16 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.328 12:06:16 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.584 12:06:16 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:09.584 "name": "raid_bdev1", 00:14:09.584 "uuid": "00d29323-1e25-48f7-a1db-d9e9220f7a16", 00:14:09.584 "strip_size_kb": 0, 00:14:09.584 "state": "online", 00:14:09.584 "raid_level": "raid1", 00:14:09.584 "superblock": false, 00:14:09.584 "num_base_bdevs": 2, 00:14:09.584 "num_base_bdevs_discovered": 2, 00:14:09.585 "num_base_bdevs_operational": 2, 00:14:09.585 "base_bdevs_list": [ 00:14:09.585 { 00:14:09.585 "name": "BaseBdev1", 00:14:09.585 "uuid": "733f5909-0c8c-4a3e-b031-05c1402839d5", 00:14:09.585 "is_configured": true, 00:14:09.585 "data_offset": 0, 00:14:09.585 "data_size": 65536 00:14:09.585 }, 00:14:09.585 { 00:14:09.585 "name": "BaseBdev2", 00:14:09.585 "uuid": "332079a2-e061-4f42-968e-4e6627e306af", 00:14:09.585 "is_configured": true, 00:14:09.585 "data_offset": 0, 00:14:09.585 "data_size": 65536 00:14:09.585 } 00:14:09.585 ] 00:14:09.585 }' 00:14:09.585 12:06:16 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:09.585 
12:06:16 -- common/autotest_common.sh@10 -- # set +x 00:14:10.147 12:06:17 -- bdev/bdev_raid.sh@567 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:10.147 12:06:17 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:14:10.147 [2024-07-25 12:06:17.335448] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:10.147 12:06:17 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:14:10.147 12:06:17 -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.147 12:06:17 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:10.403 12:06:17 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:14:10.403 12:06:17 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:14:10.403 12:06:17 -- bdev/bdev_raid.sh@591 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:14:10.403 12:06:17 -- bdev/bdev_raid.sh@574 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:10.403 [2024-07-25 12:06:17.614085] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1300dc0 00:14:10.403 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:10.403 Zero copy mechanism will not be used. 00:14:10.403 Running I/O for 60 seconds... 
00:14:10.403 [2024-07-25 12:06:17.689253] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:10.403 [2024-07-25 12:06:17.694546] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1300dc0 00:14:10.661 12:06:17 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:10.661 12:06:17 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:10.661 12:06:17 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:10.661 12:06:17 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:10.661 12:06:17 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:10.661 12:06:17 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:10.661 12:06:17 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:10.661 12:06:17 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:10.661 12:06:17 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:10.661 12:06:17 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:10.661 12:06:17 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.661 12:06:17 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.661 12:06:17 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:10.661 "name": "raid_bdev1", 00:14:10.661 "uuid": "00d29323-1e25-48f7-a1db-d9e9220f7a16", 00:14:10.661 "strip_size_kb": 0, 00:14:10.661 "state": "online", 00:14:10.661 "raid_level": "raid1", 00:14:10.661 "superblock": false, 00:14:10.661 "num_base_bdevs": 2, 00:14:10.661 "num_base_bdevs_discovered": 1, 00:14:10.661 "num_base_bdevs_operational": 1, 00:14:10.661 "base_bdevs_list": [ 00:14:10.661 { 00:14:10.661 "name": null, 00:14:10.661 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.661 "is_configured": false, 00:14:10.661 "data_offset": 0, 00:14:10.661 "data_size": 65536 00:14:10.661 }, 00:14:10.661 { 00:14:10.661 
"name": "BaseBdev2", 00:14:10.661 "uuid": "332079a2-e061-4f42-968e-4e6627e306af", 00:14:10.661 "is_configured": true, 00:14:10.661 "data_offset": 0, 00:14:10.661 "data_size": 65536 00:14:10.661 } 00:14:10.661 ] 00:14:10.661 }' 00:14:10.661 12:06:17 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:10.661 12:06:17 -- common/autotest_common.sh@10 -- # set +x 00:14:11.227 12:06:18 -- bdev/bdev_raid.sh@597 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:14:11.485 [2024-07-25 12:06:18.548964] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:14:11.485 [2024-07-25 12:06:18.548999] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:11.485 12:06:18 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:14:11.485 [2024-07-25 12:06:18.594986] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x13e96a0 00:14:11.485 [2024-07-25 12:06:18.596675] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:11.485 [2024-07-25 12:06:18.721152] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:11.485 [2024-07-25 12:06:18.721539] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:11.743 [2024-07-25 12:06:18.941311] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:11.743 [2024-07-25 12:06:18.941503] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:12.000 [2024-07-25 12:06:19.268725] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:12.000 [2024-07-25 12:06:19.269020] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: 
split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:12.258 [2024-07-25 12:06:19.382481] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:12.516 [2024-07-25 12:06:19.585259] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:12.516 12:06:19 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:12.516 12:06:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:14:12.516 12:06:19 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:14:12.516 12:06:19 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:14:12.516 12:06:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:14:12.516 12:06:19 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:12.516 12:06:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.516 12:06:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:14:12.516 "name": "raid_bdev1", 00:14:12.516 "uuid": "00d29323-1e25-48f7-a1db-d9e9220f7a16", 00:14:12.516 "strip_size_kb": 0, 00:14:12.516 "state": "online", 00:14:12.516 "raid_level": "raid1", 00:14:12.516 "superblock": false, 00:14:12.516 "num_base_bdevs": 2, 00:14:12.516 "num_base_bdevs_discovered": 2, 00:14:12.516 "num_base_bdevs_operational": 2, 00:14:12.516 "process": { 00:14:12.516 "type": "rebuild", 00:14:12.516 "target": "spare", 00:14:12.516 "progress": { 00:14:12.516 "blocks": 14336, 00:14:12.516 "percent": 21 00:14:12.516 } 00:14:12.516 }, 00:14:12.516 "base_bdevs_list": [ 00:14:12.516 { 00:14:12.516 "name": "spare", 00:14:12.516 "uuid": "fd32f815-9902-5721-98da-55d6c8f5bae9", 00:14:12.516 "is_configured": true, 00:14:12.516 "data_offset": 0, 00:14:12.516 "data_size": 65536 00:14:12.516 }, 00:14:12.516 { 00:14:12.516 "name": "BaseBdev2", 
00:14:12.516 "uuid": "332079a2-e061-4f42-968e-4e6627e306af", 00:14:12.516 "is_configured": true, 00:14:12.516 "data_offset": 0, 00:14:12.516 "data_size": 65536 00:14:12.516 } 00:14:12.516 ] 00:14:12.516 }' 00:14:12.516 12:06:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:14:12.516 12:06:19 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:12.516 12:06:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:14:12.516 [2024-07-25 12:06:19.801348] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:12.774 12:06:19 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:14:12.774 12:06:19 -- bdev/bdev_raid.sh@604 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:14:12.774 [2024-07-25 12:06:19.981348] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:13.033 [2024-07-25 12:06:20.111777] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:13.033 [2024-07-25 12:06:20.113941] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.033 [2024-07-25 12:06:20.129697] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1300dc0 00:14:13.033 12:06:20 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:13.033 12:06:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:13.033 12:06:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:13.033 12:06:20 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:13.033 12:06:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:13.033 12:06:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:13.033 12:06:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:13.033 12:06:20 -- bdev/bdev_raid.sh@123 
-- # local num_base_bdevs 00:14:13.033 12:06:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:13.033 12:06:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:13.033 12:06:20 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.033 12:06:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.292 12:06:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:13.292 "name": "raid_bdev1", 00:14:13.292 "uuid": "00d29323-1e25-48f7-a1db-d9e9220f7a16", 00:14:13.292 "strip_size_kb": 0, 00:14:13.292 "state": "online", 00:14:13.292 "raid_level": "raid1", 00:14:13.292 "superblock": false, 00:14:13.292 "num_base_bdevs": 2, 00:14:13.292 "num_base_bdevs_discovered": 1, 00:14:13.292 "num_base_bdevs_operational": 1, 00:14:13.292 "base_bdevs_list": [ 00:14:13.292 { 00:14:13.292 "name": null, 00:14:13.292 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.292 "is_configured": false, 00:14:13.292 "data_offset": 0, 00:14:13.292 "data_size": 65536 00:14:13.292 }, 00:14:13.292 { 00:14:13.292 "name": "BaseBdev2", 00:14:13.292 "uuid": "332079a2-e061-4f42-968e-4e6627e306af", 00:14:13.292 "is_configured": true, 00:14:13.292 "data_offset": 0, 00:14:13.292 "data_size": 65536 00:14:13.292 } 00:14:13.292 ] 00:14:13.292 }' 00:14:13.292 12:06:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:13.292 12:06:20 -- common/autotest_common.sh@10 -- # set +x 00:14:13.550 12:06:20 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:13.550 12:06:20 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:14:13.550 12:06:20 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:14:13.550 12:06:20 -- bdev/bdev_raid.sh@185 -- # local target=none 00:14:13.550 12:06:20 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:14:13.550 12:06:20 -- bdev/bdev_raid.sh@188 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.550 12:06:20 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.808 12:06:21 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:14:13.808 "name": "raid_bdev1", 00:14:13.808 "uuid": "00d29323-1e25-48f7-a1db-d9e9220f7a16", 00:14:13.808 "strip_size_kb": 0, 00:14:13.808 "state": "online", 00:14:13.808 "raid_level": "raid1", 00:14:13.808 "superblock": false, 00:14:13.808 "num_base_bdevs": 2, 00:14:13.808 "num_base_bdevs_discovered": 1, 00:14:13.808 "num_base_bdevs_operational": 1, 00:14:13.808 "base_bdevs_list": [ 00:14:13.808 { 00:14:13.808 "name": null, 00:14:13.808 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.808 "is_configured": false, 00:14:13.808 "data_offset": 0, 00:14:13.808 "data_size": 65536 00:14:13.808 }, 00:14:13.808 { 00:14:13.808 "name": "BaseBdev2", 00:14:13.808 "uuid": "332079a2-e061-4f42-968e-4e6627e306af", 00:14:13.808 "is_configured": true, 00:14:13.808 "data_offset": 0, 00:14:13.808 "data_size": 65536 00:14:13.808 } 00:14:13.808 ] 00:14:13.808 }' 00:14:13.808 12:06:21 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:14:13.808 12:06:21 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:14:13.808 12:06:21 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:14:13.808 12:06:21 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:14:13.808 12:06:21 -- bdev/bdev_raid.sh@613 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:14:14.067 [2024-07-25 12:06:21.249126] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:14:14.067 [2024-07-25 12:06:21.249166] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:14.067 12:06:21 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:14:14.067 [2024-07-25 12:06:21.296681] 
bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12ff5b0 00:14:14.067 [2024-07-25 12:06:21.297828] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:14.326 [2024-07-25 12:06:21.405662] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:14.326 [2024-07-25 12:06:21.406006] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:14.326 [2024-07-25 12:06:21.613909] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:14.326 [2024-07-25 12:06:21.614157] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:14.894 [2024-07-25 12:06:22.066485] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:15.152 12:06:22 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:15.152 12:06:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:14:15.152 12:06:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:14:15.152 12:06:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:14:15.152 12:06:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:14:15.152 12:06:22 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.152 12:06:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.152 [2024-07-25 12:06:22.411925] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:15.152 12:06:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:14:15.152 "name": "raid_bdev1", 00:14:15.152 "uuid": 
"00d29323-1e25-48f7-a1db-d9e9220f7a16", 00:14:15.152 "strip_size_kb": 0, 00:14:15.152 "state": "online", 00:14:15.152 "raid_level": "raid1", 00:14:15.152 "superblock": false, 00:14:15.152 "num_base_bdevs": 2, 00:14:15.152 "num_base_bdevs_discovered": 2, 00:14:15.152 "num_base_bdevs_operational": 2, 00:14:15.152 "process": { 00:14:15.152 "type": "rebuild", 00:14:15.152 "target": "spare", 00:14:15.152 "progress": { 00:14:15.152 "blocks": 16384, 00:14:15.152 "percent": 25 00:14:15.152 } 00:14:15.152 }, 00:14:15.152 "base_bdevs_list": [ 00:14:15.152 { 00:14:15.152 "name": "spare", 00:14:15.152 "uuid": "fd32f815-9902-5721-98da-55d6c8f5bae9", 00:14:15.152 "is_configured": true, 00:14:15.152 "data_offset": 0, 00:14:15.152 "data_size": 65536 00:14:15.152 }, 00:14:15.152 { 00:14:15.152 "name": "BaseBdev2", 00:14:15.152 "uuid": "332079a2-e061-4f42-968e-4e6627e306af", 00:14:15.152 "is_configured": true, 00:14:15.152 "data_offset": 0, 00:14:15.152 "data_size": 65536 00:14:15.152 } 00:14:15.152 ] 00:14:15.152 }' 00:14:15.152 12:06:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:14:15.409 12:06:22 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:15.409 12:06:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:14:15.409 12:06:22 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:14:15.410 12:06:22 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:14:15.410 12:06:22 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:14:15.410 12:06:22 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:14:15.410 12:06:22 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:14:15.410 12:06:22 -- bdev/bdev_raid.sh@657 -- # local timeout=321 00:14:15.410 12:06:22 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:14:15.410 12:06:22 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:15.410 12:06:22 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:14:15.410 
12:06:22 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:14:15.410 12:06:22 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:14:15.410 12:06:22 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:14:15.410 12:06:22 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.410 12:06:22 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:15.410 12:06:22 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:14:15.410 "name": "raid_bdev1", 00:14:15.410 "uuid": "00d29323-1e25-48f7-a1db-d9e9220f7a16", 00:14:15.410 "strip_size_kb": 0, 00:14:15.410 "state": "online", 00:14:15.410 "raid_level": "raid1", 00:14:15.410 "superblock": false, 00:14:15.410 "num_base_bdevs": 2, 00:14:15.410 "num_base_bdevs_discovered": 2, 00:14:15.410 "num_base_bdevs_operational": 2, 00:14:15.410 "process": { 00:14:15.410 "type": "rebuild", 00:14:15.410 "target": "spare", 00:14:15.410 "progress": { 00:14:15.410 "blocks": 18432, 00:14:15.410 "percent": 28 00:14:15.410 } 00:14:15.410 }, 00:14:15.410 "base_bdevs_list": [ 00:14:15.410 { 00:14:15.410 "name": "spare", 00:14:15.410 "uuid": "fd32f815-9902-5721-98da-55d6c8f5bae9", 00:14:15.410 "is_configured": true, 00:14:15.410 "data_offset": 0, 00:14:15.410 "data_size": 65536 00:14:15.410 }, 00:14:15.410 { 00:14:15.410 "name": "BaseBdev2", 00:14:15.410 "uuid": "332079a2-e061-4f42-968e-4e6627e306af", 00:14:15.410 "is_configured": true, 00:14:15.410 "data_offset": 0, 00:14:15.410 "data_size": 65536 00:14:15.410 } 00:14:15.410 ] 00:14:15.410 }' 00:14:15.410 12:06:22 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:14:15.668 12:06:22 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:15.668 12:06:22 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:14:15.668 [2024-07-25 12:06:22.730163] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 
offset_begin: 18432 offset_end: 24576 00:14:15.668 [2024-07-25 12:06:22.730605] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:15.668 12:06:22 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:14:15.668 12:06:22 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:14:15.668 [2024-07-25 12:06:22.944009] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:15.668 [2024-07-25 12:06:22.944119] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:16.247 [2024-07-25 12:06:23.299514] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:16.530 12:06:23 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:14:16.530 12:06:23 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.530 12:06:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:14:16.530 12:06:23 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:14:16.530 12:06:23 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:14:16.530 12:06:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:14:16.530 12:06:23 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:16.530 12:06:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.788 12:06:23 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:14:16.788 "name": "raid_bdev1", 00:14:16.788 "uuid": "00d29323-1e25-48f7-a1db-d9e9220f7a16", 00:14:16.788 "strip_size_kb": 0, 00:14:16.788 "state": "online", 00:14:16.788 "raid_level": "raid1", 00:14:16.788 "superblock": false, 00:14:16.788 "num_base_bdevs": 2, 00:14:16.788 "num_base_bdevs_discovered": 2, 00:14:16.788 
"num_base_bdevs_operational": 2, 00:14:16.788 "process": { 00:14:16.788 "type": "rebuild", 00:14:16.788 "target": "spare", 00:14:16.788 "progress": { 00:14:16.788 "blocks": 34816, 00:14:16.788 "percent": 53 00:14:16.788 } 00:14:16.788 }, 00:14:16.788 "base_bdevs_list": [ 00:14:16.788 { 00:14:16.788 "name": "spare", 00:14:16.788 "uuid": "fd32f815-9902-5721-98da-55d6c8f5bae9", 00:14:16.788 "is_configured": true, 00:14:16.788 "data_offset": 0, 00:14:16.788 "data_size": 65536 00:14:16.788 }, 00:14:16.788 { 00:14:16.788 "name": "BaseBdev2", 00:14:16.788 "uuid": "332079a2-e061-4f42-968e-4e6627e306af", 00:14:16.788 "is_configured": true, 00:14:16.788 "data_offset": 0, 00:14:16.788 "data_size": 65536 00:14:16.788 } 00:14:16.788 ] 00:14:16.788 }' 00:14:16.788 12:06:23 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:14:16.788 12:06:23 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:16.788 12:06:23 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:14:16.788 12:06:24 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.788 12:06:24 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:14:17.724 [2024-07-25 12:06:24.746260] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:17.724 [2024-07-25 12:06:24.746512] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:14:17.724 12:06:25 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:14:17.724 12:06:25 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:17.724 12:06:25 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:14:17.724 12:06:25 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:14:17.724 12:06:25 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:14:17.724 12:06:25 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:14:17.724 12:06:25 -- bdev/bdev_raid.sh@188 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.724 12:06:25 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.982 [2024-07-25 12:06:25.173619] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:14:17.982 12:06:25 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:14:17.982 "name": "raid_bdev1", 00:14:17.982 "uuid": "00d29323-1e25-48f7-a1db-d9e9220f7a16", 00:14:17.982 "strip_size_kb": 0, 00:14:17.982 "state": "online", 00:14:17.982 "raid_level": "raid1", 00:14:17.982 "superblock": false, 00:14:17.982 "num_base_bdevs": 2, 00:14:17.982 "num_base_bdevs_discovered": 2, 00:14:17.982 "num_base_bdevs_operational": 2, 00:14:17.982 "process": { 00:14:17.982 "type": "rebuild", 00:14:17.982 "target": "spare", 00:14:17.982 "progress": { 00:14:17.982 "blocks": 57344, 00:14:17.982 "percent": 87 00:14:17.982 } 00:14:17.982 }, 00:14:17.982 "base_bdevs_list": [ 00:14:17.982 { 00:14:17.982 "name": "spare", 00:14:17.982 "uuid": "fd32f815-9902-5721-98da-55d6c8f5bae9", 00:14:17.982 "is_configured": true, 00:14:17.982 "data_offset": 0, 00:14:17.982 "data_size": 65536 00:14:17.982 }, 00:14:17.982 { 00:14:17.982 "name": "BaseBdev2", 00:14:17.982 "uuid": "332079a2-e061-4f42-968e-4e6627e306af", 00:14:17.982 "is_configured": true, 00:14:17.982 "data_offset": 0, 00:14:17.982 "data_size": 65536 00:14:17.982 } 00:14:17.982 ] 00:14:17.982 }' 00:14:17.982 12:06:25 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:14:17.982 12:06:25 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:17.982 12:06:25 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:14:17.982 12:06:25 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:14:17.982 12:06:25 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:14:18.548 [2024-07-25 12:06:25.608152] 
bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:18.548 [2024-07-25 12:06:25.713371] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:18.548 [2024-07-25 12:06:25.715396] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.114 12:06:26 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:14:19.114 12:06:26 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:19.114 12:06:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:14:19.114 12:06:26 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:14:19.114 12:06:26 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:14:19.114 12:06:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:14:19.114 12:06:26 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.114 12:06:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.372 12:06:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:14:19.372 "name": "raid_bdev1", 00:14:19.372 "uuid": "00d29323-1e25-48f7-a1db-d9e9220f7a16", 00:14:19.372 "strip_size_kb": 0, 00:14:19.372 "state": "online", 00:14:19.372 "raid_level": "raid1", 00:14:19.372 "superblock": false, 00:14:19.372 "num_base_bdevs": 2, 00:14:19.372 "num_base_bdevs_discovered": 2, 00:14:19.372 "num_base_bdevs_operational": 2, 00:14:19.372 "base_bdevs_list": [ 00:14:19.372 { 00:14:19.372 "name": "spare", 00:14:19.372 "uuid": "fd32f815-9902-5721-98da-55d6c8f5bae9", 00:14:19.372 "is_configured": true, 00:14:19.372 "data_offset": 0, 00:14:19.372 "data_size": 65536 00:14:19.372 }, 00:14:19.372 { 00:14:19.372 "name": "BaseBdev2", 00:14:19.372 "uuid": "332079a2-e061-4f42-968e-4e6627e306af", 00:14:19.372 "is_configured": true, 00:14:19.372 "data_offset": 0, 00:14:19.372 "data_size": 65536 00:14:19.372 } 
00:14:19.372 ] 00:14:19.372 }' 00:14:19.372 12:06:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:14:19.372 12:06:26 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:19.372 12:06:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:14:19.372 12:06:26 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:14:19.372 12:06:26 -- bdev/bdev_raid.sh@660 -- # break 00:14:19.372 12:06:26 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:19.372 12:06:26 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:14:19.372 12:06:26 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:14:19.372 12:06:26 -- bdev/bdev_raid.sh@185 -- # local target=none 00:14:19.372 12:06:26 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:14:19.372 12:06:26 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.372 12:06:26 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.630 12:06:26 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:14:19.630 "name": "raid_bdev1", 00:14:19.630 "uuid": "00d29323-1e25-48f7-a1db-d9e9220f7a16", 00:14:19.630 "strip_size_kb": 0, 00:14:19.630 "state": "online", 00:14:19.630 "raid_level": "raid1", 00:14:19.630 "superblock": false, 00:14:19.630 "num_base_bdevs": 2, 00:14:19.630 "num_base_bdevs_discovered": 2, 00:14:19.630 "num_base_bdevs_operational": 2, 00:14:19.630 "base_bdevs_list": [ 00:14:19.630 { 00:14:19.630 "name": "spare", 00:14:19.630 "uuid": "fd32f815-9902-5721-98da-55d6c8f5bae9", 00:14:19.630 "is_configured": true, 00:14:19.630 "data_offset": 0, 00:14:19.630 "data_size": 65536 00:14:19.630 }, 00:14:19.630 { 00:14:19.630 "name": "BaseBdev2", 00:14:19.630 "uuid": "332079a2-e061-4f42-968e-4e6627e306af", 00:14:19.630 "is_configured": true, 00:14:19.630 "data_offset": 0, 00:14:19.630 "data_size": 65536 00:14:19.630 } 00:14:19.630 ] 
00:14:19.630 }' 00:14:19.630 12:06:26 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:14:19.630 12:06:26 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:14:19.630 12:06:26 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:14:19.630 12:06:26 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:14:19.630 12:06:26 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:19.630 12:06:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:19.630 12:06:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:19.630 12:06:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:19.630 12:06:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:19.630 12:06:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:19.630 12:06:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:19.630 12:06:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:19.630 12:06:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:19.630 12:06:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:19.630 12:06:26 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:19.630 12:06:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.888 12:06:26 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:19.888 "name": "raid_bdev1", 00:14:19.888 "uuid": "00d29323-1e25-48f7-a1db-d9e9220f7a16", 00:14:19.888 "strip_size_kb": 0, 00:14:19.888 "state": "online", 00:14:19.888 "raid_level": "raid1", 00:14:19.888 "superblock": false, 00:14:19.888 "num_base_bdevs": 2, 00:14:19.888 "num_base_bdevs_discovered": 2, 00:14:19.888 "num_base_bdevs_operational": 2, 00:14:19.888 "base_bdevs_list": [ 00:14:19.888 { 00:14:19.888 "name": "spare", 00:14:19.888 "uuid": "fd32f815-9902-5721-98da-55d6c8f5bae9", 00:14:19.888 "is_configured": true, 
00:14:19.888 "data_offset": 0, 00:14:19.888 "data_size": 65536 00:14:19.888 }, 00:14:19.888 { 00:14:19.888 "name": "BaseBdev2", 00:14:19.888 "uuid": "332079a2-e061-4f42-968e-4e6627e306af", 00:14:19.888 "is_configured": true, 00:14:19.888 "data_offset": 0, 00:14:19.888 "data_size": 65536 00:14:19.888 } 00:14:19.888 ] 00:14:19.888 }' 00:14:19.888 12:06:26 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:19.888 12:06:26 -- common/autotest_common.sh@10 -- # set +x 00:14:20.455 12:06:27 -- bdev/bdev_raid.sh@670 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:20.455 [2024-07-25 12:06:27.612904] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:20.455 [2024-07-25 12:06:27.612932] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:20.455 00:14:20.455 Latency(us) 00:14:20.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:20.455 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:20.455 raid_bdev1 : 10.00 118.59 355.78 0.00 0.00 11524.15 245.76 113519.75 00:14:20.455 =================================================================================================================== 00:14:20.455 Total : 118.59 355.78 0.00 0.00 11524.15 245.76 113519.75 00:14:20.455 [2024-07-25 12:06:27.643673] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:20.455 [2024-07-25 12:06:27.643698] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:20.455 [2024-07-25 12:06:27.643745] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:20.455 [2024-07-25 12:06:27.643753] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12ff2d0 name raid_bdev1, state offline 00:14:20.455 0 00:14:20.455 12:06:27 -- bdev/bdev_raid.sh@671 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:20.455 12:06:27 -- bdev/bdev_raid.sh@671 -- # jq length 00:14:20.714 12:06:27 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:14:20.714 12:06:27 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:14:20.714 12:06:27 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:14:20.714 12:06:27 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:20.714 12:06:27 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:20.714 12:06:27 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:20.714 12:06:27 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:20.714 12:06:27 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:20.714 12:06:27 -- bdev/nbd_common.sh@12 -- # local i 00:14:20.714 12:06:27 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:20.714 12:06:27 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:20.714 12:06:27 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:14:20.714 /dev/nbd0 00:14:20.714 12:06:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:20.972 12:06:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:20.972 12:06:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:14:20.972 12:06:28 -- common/autotest_common.sh@857 -- # local i 00:14:20.972 12:06:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:20.972 12:06:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:20.972 12:06:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:20.972 12:06:28 -- common/autotest_common.sh@861 -- # break 00:14:20.972 12:06:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:20.972 12:06:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:20.972 12:06:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 
of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:20.972 1+0 records in 00:14:20.972 1+0 records out 00:14:20.972 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00026976 s, 15.2 MB/s 00:14:20.972 12:06:28 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:14:20.972 12:06:28 -- common/autotest_common.sh@874 -- # size=4096 00:14:20.972 12:06:28 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:14:20.972 12:06:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:20.972 12:06:28 -- common/autotest_common.sh@877 -- # return 0 00:14:20.972 12:06:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:20.972 12:06:28 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:20.972 12:06:28 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:14:20.972 12:06:28 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:14:20.972 12:06:28 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:14:20.972 12:06:28 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:20.972 12:06:28 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:14:20.972 12:06:28 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:20.972 12:06:28 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:20.972 12:06:28 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:20.972 12:06:28 -- bdev/nbd_common.sh@12 -- # local i 00:14:20.972 12:06:28 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:20.972 12:06:28 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:20.973 12:06:28 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:14:20.973 /dev/nbd1 00:14:20.973 12:06:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:20.973 12:06:28 -- bdev/nbd_common.sh@17 -- 
# waitfornbd nbd1 00:14:20.973 12:06:28 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:14:20.973 12:06:28 -- common/autotest_common.sh@857 -- # local i 00:14:20.973 12:06:28 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:20.973 12:06:28 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:20.973 12:06:28 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:14:20.973 12:06:28 -- common/autotest_common.sh@861 -- # break 00:14:20.973 12:06:28 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:20.973 12:06:28 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:20.973 12:06:28 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:20.973 1+0 records in 00:14:20.973 1+0 records out 00:14:20.973 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000155399 s, 26.4 MB/s 00:14:20.973 12:06:28 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:14:20.973 12:06:28 -- common/autotest_common.sh@874 -- # size=4096 00:14:20.973 12:06:28 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:14:20.973 12:06:28 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:20.973 12:06:28 -- common/autotest_common.sh@877 -- # return 0 00:14:20.973 12:06:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:20.973 12:06:28 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:20.973 12:06:28 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:21.231 12:06:28 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:14:21.231 12:06:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:21.231 12:06:28 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:21.231 12:06:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:21.231 12:06:28 -- 
bdev/nbd_common.sh@51 -- # local i 00:14:21.231 12:06:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:21.231 12:06:28 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:14:21.231 12:06:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:21.231 12:06:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:21.231 12:06:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:21.231 12:06:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:21.231 12:06:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:21.231 12:06:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:21.231 12:06:28 -- bdev/nbd_common.sh@41 -- # break 00:14:21.231 12:06:28 -- bdev/nbd_common.sh@45 -- # return 0 00:14:21.231 12:06:28 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:21.231 12:06:28 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:21.231 12:06:28 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:21.231 12:06:28 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:21.231 12:06:28 -- bdev/nbd_common.sh@51 -- # local i 00:14:21.231 12:06:28 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:21.231 12:06:28 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:21.489 12:06:28 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:21.489 12:06:28 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:21.489 12:06:28 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:21.489 12:06:28 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:21.489 12:06:28 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:21.489 12:06:28 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:21.489 12:06:28 -- bdev/nbd_common.sh@41 -- # break 00:14:21.489 12:06:28 -- 
bdev/nbd_common.sh@45 -- # return 0 00:14:21.489 12:06:28 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:14:21.489 12:06:28 -- bdev/bdev_raid.sh@709 -- # killprocess 1260587 00:14:21.489 12:06:28 -- common/autotest_common.sh@926 -- # '[' -z 1260587 ']' 00:14:21.489 12:06:28 -- common/autotest_common.sh@930 -- # kill -0 1260587 00:14:21.489 12:06:28 -- common/autotest_common.sh@931 -- # uname 00:14:21.489 12:06:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:21.489 12:06:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1260587 00:14:21.489 12:06:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:21.489 12:06:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:21.489 12:06:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1260587' 00:14:21.489 killing process with pid 1260587 00:14:21.489 12:06:28 -- common/autotest_common.sh@945 -- # kill 1260587 00:14:21.489 Received shutdown signal, test time was about 11.077376 seconds 00:14:21.489 00:14:21.489 Latency(us) 00:14:21.489 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.489 =================================================================================================================== 00:14:21.489 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:21.489 [2024-07-25 12:06:28.720341] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:21.489 12:06:28 -- common/autotest_common.sh@950 -- # wait 1260587 00:14:21.489 [2024-07-25 12:06:28.741620] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:21.748 12:06:28 -- bdev/bdev_raid.sh@711 -- # return 0 00:14:21.748 00:14:21.748 real 0m14.341s 00:14:21.748 user 0m21.019s 00:14:21.748 sys 0m2.097s 00:14:21.748 12:06:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:21.748 12:06:28 -- common/autotest_common.sh@10 -- # set +x 00:14:21.748 ************************************ 00:14:21.748 END 
TEST raid_rebuild_test_io 00:14:21.748 ************************************ 00:14:21.748 12:06:29 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true 00:14:21.748 12:06:29 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:14:21.748 12:06:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:21.748 12:06:29 -- common/autotest_common.sh@10 -- # set +x 00:14:21.748 ************************************ 00:14:21.748 START TEST raid_rebuild_test_sb_io 00:14:21.748 ************************************ 00:14:21.748 12:06:29 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 2 true true 00:14:21.748 12:06:29 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:14:21.748 12:06:29 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=2 00:14:21.748 12:06:29 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:14:21.748 12:06:29 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:14:21.748 12:06:29 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:14:21.748 12:06:29 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:14:21.748 12:06:29 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:14:21.748 12:06:29 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:14:21.748 12:06:29 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:14:21.748 12:06:29 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:14:21.748 12:06:29 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:14:21.748 12:06:29 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:14:21.748 12:06:29 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:21.748 12:06:29 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:14:21.748 12:06:29 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:14:21.748 12:06:29 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:14:21.748 12:06:29 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:14:21.748 12:06:29 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:14:21.748 12:06:29 -- 
bdev/bdev_raid.sh@526 -- # local data_offset 00:14:21.748 12:06:29 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:14:21.748 12:06:29 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:14:21.748 12:06:29 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:14:21.748 12:06:29 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:14:21.748 12:06:29 -- bdev/bdev_raid.sh@544 -- # raid_pid=1262669 00:14:21.748 12:06:29 -- bdev/bdev_raid.sh@545 -- # waitforlisten 1262669 /var/tmp/spdk-raid.sock 00:14:21.748 12:06:29 -- bdev/bdev_raid.sh@543 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:21.748 12:06:29 -- common/autotest_common.sh@819 -- # '[' -z 1262669 ']' 00:14:21.748 12:06:29 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:21.748 12:06:29 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:21.748 12:06:29 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:21.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:21.748 12:06:29 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:21.748 12:06:29 -- common/autotest_common.sh@10 -- # set +x 00:14:22.006 [2024-07-25 12:06:29.084189] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:22.006 [2024-07-25 12:06:29.084239] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1262669 ] 00:14:22.006 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:22.006 Zero copy mechanism will not be used. 
00:14:22.006 [2024-07-25 12:06:29.172051] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:22.006 [2024-07-25 12:06:29.262431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.263 [2024-07-25 12:06:29.323312] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:22.263 [2024-07-25 12:06:29.323340] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:22.827 12:06:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:22.827 12:06:29 -- common/autotest_common.sh@852 -- # return 0 00:14:22.827 12:06:29 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:14:22.827 12:06:29 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:14:22.827 12:06:29 -- bdev/bdev_raid.sh@550 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:22.827 BaseBdev1_malloc 00:14:22.827 12:06:30 -- bdev/bdev_raid.sh@551 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:23.084 [2024-07-25 12:06:30.188910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:23.084 [2024-07-25 12:06:30.188953] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.084 [2024-07-25 12:06:30.188970] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1be7a00 00:14:23.084 [2024-07-25 12:06:30.188978] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.084 [2024-07-25 12:06:30.190250] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.084 [2024-07-25 12:06:30.190278] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:23.084 BaseBdev1 00:14:23.084 12:06:30 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 
00:14:23.084 12:06:30 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:14:23.084 12:06:30 -- bdev/bdev_raid.sh@550 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:23.084 BaseBdev2_malloc 00:14:23.085 12:06:30 -- bdev/bdev_raid.sh@551 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:23.341 [2024-07-25 12:06:30.530888] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:23.341 [2024-07-25 12:06:30.530926] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.341 [2024-07-25 12:06:30.530960] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1be85f0 00:14:23.341 [2024-07-25 12:06:30.530969] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.341 [2024-07-25 12:06:30.532136] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.341 [2024-07-25 12:06:30.532159] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:23.341 BaseBdev2 00:14:23.341 12:06:30 -- bdev/bdev_raid.sh@558 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:14:23.597 spare_malloc 00:14:23.597 12:06:30 -- bdev/bdev_raid.sh@559 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:23.597 spare_delay 00:14:23.597 12:06:30 -- bdev/bdev_raid.sh@560 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:14:23.853 [2024-07-25 12:06:31.049049] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:14:23.853 [2024-07-25 12:06:31.049084] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.853 [2024-07-25 12:06:31.049118] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1be8f50 00:14:23.853 [2024-07-25 12:06:31.049126] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.853 [2024-07-25 12:06:31.050250] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.853 [2024-07-25 12:06:31.050278] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:23.853 spare 00:14:23.853 12:06:31 -- bdev/bdev_raid.sh@563 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:14:24.110 [2024-07-25 12:06:31.213516] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:24.110 [2024-07-25 12:06:31.214451] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:24.110 [2024-07-25 12:06:31.214570] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x1bea5f0 00:14:24.110 [2024-07-25 12:06:31.214579] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:24.110 [2024-07-25 12:06:31.214716] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1bdeab0 00:14:24.110 [2024-07-25 12:06:31.214813] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1bea5f0 00:14:24.110 [2024-07-25 12:06:31.214820] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1bea5f0 00:14:24.110 [2024-07-25 12:06:31.214889] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.110 12:06:31 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:24.110 12:06:31 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:24.110 12:06:31 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:24.110 12:06:31 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:24.110 12:06:31 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:24.110 12:06:31 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:24.110 12:06:31 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:24.110 12:06:31 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:24.110 12:06:31 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:24.110 12:06:31 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:24.110 12:06:31 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.110 12:06:31 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.110 12:06:31 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:24.110 "name": "raid_bdev1", 00:14:24.110 "uuid": "bcd7ef1b-c9d5-43c0-9ef4-3e07fe79a67e", 00:14:24.110 "strip_size_kb": 0, 00:14:24.110 "state": "online", 00:14:24.110 "raid_level": "raid1", 00:14:24.110 "superblock": true, 00:14:24.110 "num_base_bdevs": 2, 00:14:24.110 "num_base_bdevs_discovered": 2, 00:14:24.110 "num_base_bdevs_operational": 2, 00:14:24.110 "base_bdevs_list": [ 00:14:24.110 { 00:14:24.110 "name": "BaseBdev1", 00:14:24.110 "uuid": "3c6a59cc-3518-5155-90df-5219ea411811", 00:14:24.110 "is_configured": true, 00:14:24.110 "data_offset": 2048, 00:14:24.110 "data_size": 63488 00:14:24.110 }, 00:14:24.110 { 00:14:24.110 "name": "BaseBdev2", 00:14:24.110 "uuid": "5db5cec3-9ef1-5465-946e-0450d6eb8288", 00:14:24.110 "is_configured": true, 00:14:24.110 "data_offset": 2048, 00:14:24.110 "data_size": 63488 00:14:24.110 } 00:14:24.110 ] 00:14:24.110 }' 00:14:24.110 12:06:31 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:24.110 12:06:31 -- common/autotest_common.sh@10 -- # 
set +x 00:14:24.673 12:06:31 -- bdev/bdev_raid.sh@567 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:24.673 12:06:31 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:14:24.930 [2024-07-25 12:06:32.015665] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:24.930 12:06:32 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:14:24.930 12:06:32 -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.930 12:06:32 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:24.930 12:06:32 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:14:24.930 12:06:32 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:14:24.930 12:06:32 -- bdev/bdev_raid.sh@591 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:14:24.930 12:06:32 -- bdev/bdev_raid.sh@574 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:25.187 [2024-07-25 12:06:32.298456] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1bec070 00:14:25.187 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:25.188 Zero copy mechanism will not be used. 00:14:25.188 Running I/O for 60 seconds... 
00:14:25.188 [2024-07-25 12:06:32.355144] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:25.188 [2024-07-25 12:06:32.355303] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1bec070 00:14:25.188 12:06:32 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:25.188 12:06:32 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:25.188 12:06:32 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:25.188 12:06:32 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:25.188 12:06:32 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:25.188 12:06:32 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:25.188 12:06:32 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:25.188 12:06:32 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:25.188 12:06:32 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:25.188 12:06:32 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:25.188 12:06:32 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.188 12:06:32 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.444 12:06:32 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:25.444 "name": "raid_bdev1", 00:14:25.444 "uuid": "bcd7ef1b-c9d5-43c0-9ef4-3e07fe79a67e", 00:14:25.444 "strip_size_kb": 0, 00:14:25.444 "state": "online", 00:14:25.444 "raid_level": "raid1", 00:14:25.444 "superblock": true, 00:14:25.444 "num_base_bdevs": 2, 00:14:25.444 "num_base_bdevs_discovered": 1, 00:14:25.444 "num_base_bdevs_operational": 1, 00:14:25.444 "base_bdevs_list": [ 00:14:25.444 { 00:14:25.444 "name": null, 00:14:25.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.444 "is_configured": false, 00:14:25.444 "data_offset": 2048, 00:14:25.444 "data_size": 63488 00:14:25.444 }, 00:14:25.444 { 
00:14:25.444 "name": "BaseBdev2", 00:14:25.444 "uuid": "5db5cec3-9ef1-5465-946e-0450d6eb8288", 00:14:25.444 "is_configured": true, 00:14:25.444 "data_offset": 2048, 00:14:25.444 "data_size": 63488 00:14:25.444 } 00:14:25.444 ] 00:14:25.444 }' 00:14:25.444 12:06:32 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:25.444 12:06:32 -- common/autotest_common.sh@10 -- # set +x 00:14:26.007 12:06:33 -- bdev/bdev_raid.sh@597 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:14:26.007 [2024-07-25 12:06:33.186374] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:14:26.007 [2024-07-25 12:06:33.186414] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:26.007 12:06:33 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:14:26.007 [2024-07-25 12:06:33.237001] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1be1cf0 00:14:26.007 [2024-07-25 12:06:33.238706] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:26.263 [2024-07-25 12:06:33.352717] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:26.263 [2024-07-25 12:06:33.353120] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:26.263 [2024-07-25 12:06:33.461458] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:26.263 [2024-07-25 12:06:33.461664] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:26.826 [2024-07-25 12:06:33.905044] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:26.826 [2024-07-25 12:06:33.905286] bdev_raid.c: 
723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:27.084 [2024-07-25 12:06:34.230331] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:27.084 12:06:34 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.084 12:06:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:14:27.084 12:06:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:14:27.084 12:06:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:14:27.084 12:06:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:14:27.084 12:06:34 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.084 12:06:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.344 12:06:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:14:27.344 "name": "raid_bdev1", 00:14:27.344 "uuid": "bcd7ef1b-c9d5-43c0-9ef4-3e07fe79a67e", 00:14:27.344 "strip_size_kb": 0, 00:14:27.344 "state": "online", 00:14:27.344 "raid_level": "raid1", 00:14:27.344 "superblock": true, 00:14:27.344 "num_base_bdevs": 2, 00:14:27.344 "num_base_bdevs_discovered": 2, 00:14:27.344 "num_base_bdevs_operational": 2, 00:14:27.344 "process": { 00:14:27.344 "type": "rebuild", 00:14:27.344 "target": "spare", 00:14:27.344 "progress": { 00:14:27.344 "blocks": 14336, 00:14:27.344 "percent": 22 00:14:27.344 } 00:14:27.344 }, 00:14:27.344 "base_bdevs_list": [ 00:14:27.344 { 00:14:27.344 "name": "spare", 00:14:27.344 "uuid": "05edbaae-5745-5fcc-90c7-686984f09b01", 00:14:27.344 "is_configured": true, 00:14:27.344 "data_offset": 2048, 00:14:27.344 "data_size": 63488 00:14:27.344 }, 00:14:27.344 { 00:14:27.344 "name": "BaseBdev2", 00:14:27.344 "uuid": "5db5cec3-9ef1-5465-946e-0450d6eb8288", 00:14:27.344 "is_configured": true, 00:14:27.344 
"data_offset": 2048, 00:14:27.344 "data_size": 63488 00:14:27.344 } 00:14:27.344 ] 00:14:27.344 }' 00:14:27.344 12:06:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:14:27.344 [2024-07-25 12:06:34.443399] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:27.344 12:06:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:27.344 12:06:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:14:27.344 12:06:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:14:27.344 12:06:34 -- bdev/bdev_raid.sh@604 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:14:27.344 [2024-07-25 12:06:34.628077] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:27.602 [2024-07-25 12:06:34.657984] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:27.602 [2024-07-25 12:06:34.670902] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.602 [2024-07-25 12:06:34.692291] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1bec070 00:14:27.602 12:06:34 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:27.602 12:06:34 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:27.602 12:06:34 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:27.602 12:06:34 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:27.602 12:06:34 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:27.602 12:06:34 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=1 00:14:27.602 12:06:34 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:27.602 12:06:34 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:27.602 12:06:34 -- bdev/bdev_raid.sh@124 -- # local 
num_base_bdevs_discovered 00:14:27.602 12:06:34 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:27.602 12:06:34 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.602 12:06:34 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.602 12:06:34 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:27.602 "name": "raid_bdev1", 00:14:27.602 "uuid": "bcd7ef1b-c9d5-43c0-9ef4-3e07fe79a67e", 00:14:27.602 "strip_size_kb": 0, 00:14:27.602 "state": "online", 00:14:27.602 "raid_level": "raid1", 00:14:27.602 "superblock": true, 00:14:27.602 "num_base_bdevs": 2, 00:14:27.602 "num_base_bdevs_discovered": 1, 00:14:27.602 "num_base_bdevs_operational": 1, 00:14:27.602 "base_bdevs_list": [ 00:14:27.602 { 00:14:27.602 "name": null, 00:14:27.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.602 "is_configured": false, 00:14:27.602 "data_offset": 2048, 00:14:27.602 "data_size": 63488 00:14:27.602 }, 00:14:27.602 { 00:14:27.602 "name": "BaseBdev2", 00:14:27.602 "uuid": "5db5cec3-9ef1-5465-946e-0450d6eb8288", 00:14:27.602 "is_configured": true, 00:14:27.602 "data_offset": 2048, 00:14:27.602 "data_size": 63488 00:14:27.602 } 00:14:27.602 ] 00:14:27.602 }' 00:14:27.602 12:06:34 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:27.602 12:06:34 -- common/autotest_common.sh@10 -- # set +x 00:14:28.168 12:06:35 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:28.168 12:06:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:14:28.168 12:06:35 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:14:28.168 12:06:35 -- bdev/bdev_raid.sh@185 -- # local target=none 00:14:28.168 12:06:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:14:28.168 12:06:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.168 12:06:35 -- bdev/bdev_raid.sh@188 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.426 12:06:35 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:14:28.426 "name": "raid_bdev1", 00:14:28.426 "uuid": "bcd7ef1b-c9d5-43c0-9ef4-3e07fe79a67e", 00:14:28.426 "strip_size_kb": 0, 00:14:28.426 "state": "online", 00:14:28.426 "raid_level": "raid1", 00:14:28.426 "superblock": true, 00:14:28.426 "num_base_bdevs": 2, 00:14:28.426 "num_base_bdevs_discovered": 1, 00:14:28.426 "num_base_bdevs_operational": 1, 00:14:28.426 "base_bdevs_list": [ 00:14:28.426 { 00:14:28.426 "name": null, 00:14:28.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.427 "is_configured": false, 00:14:28.427 "data_offset": 2048, 00:14:28.427 "data_size": 63488 00:14:28.427 }, 00:14:28.427 { 00:14:28.427 "name": "BaseBdev2", 00:14:28.427 "uuid": "5db5cec3-9ef1-5465-946e-0450d6eb8288", 00:14:28.427 "is_configured": true, 00:14:28.427 "data_offset": 2048, 00:14:28.427 "data_size": 63488 00:14:28.427 } 00:14:28.427 ] 00:14:28.427 }' 00:14:28.427 12:06:35 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:14:28.427 12:06:35 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:14:28.427 12:06:35 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:14:28.427 12:06:35 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:14:28.427 12:06:35 -- bdev/bdev_raid.sh@613 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:14:28.685 [2024-07-25 12:06:35.796235] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:14:28.685 [2024-07-25 12:06:35.796277] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:28.685 12:06:35 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:14:28.685 [2024-07-25 12:06:35.832354] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1be8880 00:14:28.685 
[2024-07-25 12:06:35.833476] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:28.685 [2024-07-25 12:06:35.946308] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:28.685 [2024-07-25 12:06:35.946737] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:28.944 [2024-07-25 12:06:36.177839] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:28.944 [2024-07-25 12:06:36.178107] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:29.509 [2024-07-25 12:06:36.545523] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:29.767 12:06:36 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.767 12:06:36 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:14:29.767 12:06:36 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:14:29.767 12:06:36 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:14:29.767 12:06:36 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:14:29.767 12:06:36 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:29.767 12:06:36 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.767 [2024-07-25 12:06:36.879344] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:29.767 [2024-07-25 12:06:36.999937] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:29.767 [2024-07-25 12:06:37.005562] bdev_raid.c: 
723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:29.767 12:06:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:14:29.767 "name": "raid_bdev1", 00:14:29.767 "uuid": "bcd7ef1b-c9d5-43c0-9ef4-3e07fe79a67e", 00:14:29.767 "strip_size_kb": 0, 00:14:29.767 "state": "online", 00:14:29.767 "raid_level": "raid1", 00:14:29.767 "superblock": true, 00:14:29.767 "num_base_bdevs": 2, 00:14:29.767 "num_base_bdevs_discovered": 2, 00:14:29.767 "num_base_bdevs_operational": 2, 00:14:29.767 "process": { 00:14:29.767 "type": "rebuild", 00:14:29.767 "target": "spare", 00:14:29.767 "progress": { 00:14:29.767 "blocks": 14336, 00:14:29.767 "percent": 22 00:14:29.767 } 00:14:29.767 }, 00:14:29.767 "base_bdevs_list": [ 00:14:29.767 { 00:14:29.767 "name": "spare", 00:14:29.767 "uuid": "05edbaae-5745-5fcc-90c7-686984f09b01", 00:14:29.767 "is_configured": true, 00:14:29.767 "data_offset": 2048, 00:14:29.767 "data_size": 63488 00:14:29.767 }, 00:14:29.767 { 00:14:29.767 "name": "BaseBdev2", 00:14:29.767 "uuid": "5db5cec3-9ef1-5465-946e-0450d6eb8288", 00:14:29.767 "is_configured": true, 00:14:29.767 "data_offset": 2048, 00:14:29.767 "data_size": 63488 00:14:29.767 } 00:14:29.767 ] 00:14:29.767 }' 00:14:29.767 12:06:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:14:29.767 12:06:37 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.767 12:06:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:14:30.024 12:06:37 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:14:30.024 12:06:37 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:14:30.024 12:06:37 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:14:30.024 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:14:30.024 12:06:37 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=2 00:14:30.024 12:06:37 -- bdev/bdev_raid.sh@644 -- # '[' 
raid1 = raid1 ']' 00:14:30.024 12:06:37 -- bdev/bdev_raid.sh@644 -- # '[' 2 -gt 2 ']' 00:14:30.024 12:06:37 -- bdev/bdev_raid.sh@657 -- # local timeout=336 00:14:30.024 12:06:37 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:14:30.024 12:06:37 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:30.024 12:06:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:14:30.024 12:06:37 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:14:30.024 12:06:37 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:14:30.024 12:06:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:14:30.024 12:06:37 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:30.024 12:06:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.024 12:06:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:14:30.024 "name": "raid_bdev1", 00:14:30.024 "uuid": "bcd7ef1b-c9d5-43c0-9ef4-3e07fe79a67e", 00:14:30.024 "strip_size_kb": 0, 00:14:30.024 "state": "online", 00:14:30.024 "raid_level": "raid1", 00:14:30.024 "superblock": true, 00:14:30.024 "num_base_bdevs": 2, 00:14:30.024 "num_base_bdevs_discovered": 2, 00:14:30.024 "num_base_bdevs_operational": 2, 00:14:30.024 "process": { 00:14:30.024 "type": "rebuild", 00:14:30.024 "target": "spare", 00:14:30.024 "progress": { 00:14:30.024 "blocks": 18432, 00:14:30.024 "percent": 29 00:14:30.024 } 00:14:30.024 }, 00:14:30.024 "base_bdevs_list": [ 00:14:30.024 { 00:14:30.024 "name": "spare", 00:14:30.024 "uuid": "05edbaae-5745-5fcc-90c7-686984f09b01", 00:14:30.024 "is_configured": true, 00:14:30.024 "data_offset": 2048, 00:14:30.024 "data_size": 63488 00:14:30.024 }, 00:14:30.024 { 00:14:30.024 "name": "BaseBdev2", 00:14:30.024 "uuid": "5db5cec3-9ef1-5465-946e-0450d6eb8288", 00:14:30.025 "is_configured": true, 00:14:30.025 "data_offset": 2048, 00:14:30.025 "data_size": 
63488 00:14:30.025 } 00:14:30.025 ] 00:14:30.025 }' 00:14:30.025 12:06:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:14:30.025 12:06:37 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:30.282 12:06:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:14:30.282 12:06:37 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:14:30.282 12:06:37 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:14:30.566 [2024-07-25 12:06:37.775632] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:14:31.146 12:06:38 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:14:31.146 12:06:38 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:31.146 12:06:38 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:14:31.146 12:06:38 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:14:31.146 12:06:38 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:14:31.146 12:06:38 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:14:31.146 12:06:38 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:31.146 12:06:38 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:31.405 [2024-07-25 12:06:38.522468] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:14:31.405 12:06:38 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:14:31.405 "name": "raid_bdev1", 00:14:31.405 "uuid": "bcd7ef1b-c9d5-43c0-9ef4-3e07fe79a67e", 00:14:31.405 "strip_size_kb": 0, 00:14:31.405 "state": "online", 00:14:31.405 "raid_level": "raid1", 00:14:31.405 "superblock": true, 00:14:31.405 "num_base_bdevs": 2, 00:14:31.405 "num_base_bdevs_discovered": 2, 00:14:31.405 "num_base_bdevs_operational": 2, 00:14:31.405 "process": { 00:14:31.405 "type": 
"rebuild", 00:14:31.405 "target": "spare", 00:14:31.405 "progress": { 00:14:31.405 "blocks": 40960, 00:14:31.405 "percent": 64 00:14:31.405 } 00:14:31.405 }, 00:14:31.405 "base_bdevs_list": [ 00:14:31.405 { 00:14:31.405 "name": "spare", 00:14:31.405 "uuid": "05edbaae-5745-5fcc-90c7-686984f09b01", 00:14:31.405 "is_configured": true, 00:14:31.405 "data_offset": 2048, 00:14:31.405 "data_size": 63488 00:14:31.405 }, 00:14:31.405 { 00:14:31.405 "name": "BaseBdev2", 00:14:31.405 "uuid": "5db5cec3-9ef1-5465-946e-0450d6eb8288", 00:14:31.405 "is_configured": true, 00:14:31.405 "data_offset": 2048, 00:14:31.405 "data_size": 63488 00:14:31.405 } 00:14:31.405 ] 00:14:31.405 }' 00:14:31.405 12:06:38 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:14:31.405 12:06:38 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:31.405 12:06:38 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:14:31.405 12:06:38 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:14:31.405 12:06:38 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:14:31.973 [2024-07-25 12:06:39.191797] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:31.973 [2024-07-25 12:06:39.192025] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:32.542 [2024-07-25 12:06:39.632549] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:32.542 12:06:39 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:14:32.542 12:06:39 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:32.542 12:06:39 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:14:32.542 12:06:39 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:14:32.542 12:06:39 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:14:32.542 12:06:39 -- 
bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:14:32.542 12:06:39 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:32.542 12:06:39 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.542 12:06:39 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:14:32.542 "name": "raid_bdev1", 00:14:32.542 "uuid": "bcd7ef1b-c9d5-43c0-9ef4-3e07fe79a67e", 00:14:32.542 "strip_size_kb": 0, 00:14:32.542 "state": "online", 00:14:32.542 "raid_level": "raid1", 00:14:32.542 "superblock": true, 00:14:32.542 "num_base_bdevs": 2, 00:14:32.542 "num_base_bdevs_discovered": 2, 00:14:32.542 "num_base_bdevs_operational": 2, 00:14:32.542 "process": { 00:14:32.542 "type": "rebuild", 00:14:32.542 "target": "spare", 00:14:32.542 "progress": { 00:14:32.542 "blocks": 61440, 00:14:32.542 "percent": 96 00:14:32.542 } 00:14:32.542 }, 00:14:32.542 "base_bdevs_list": [ 00:14:32.542 { 00:14:32.542 "name": "spare", 00:14:32.542 "uuid": "05edbaae-5745-5fcc-90c7-686984f09b01", 00:14:32.542 "is_configured": true, 00:14:32.542 "data_offset": 2048, 00:14:32.542 "data_size": 63488 00:14:32.542 }, 00:14:32.542 { 00:14:32.542 "name": "BaseBdev2", 00:14:32.542 "uuid": "5db5cec3-9ef1-5465-946e-0450d6eb8288", 00:14:32.542 "is_configured": true, 00:14:32.542 "data_offset": 2048, 00:14:32.542 "data_size": 63488 00:14:32.542 } 00:14:32.542 ] 00:14:32.542 }' 00:14:32.542 12:06:39 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:14:32.801 [2024-07-25 12:06:39.859814] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:32.801 12:06:39 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:32.801 12:06:39 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:14:32.801 12:06:39 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:14:32.801 12:06:39 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:14:32.801 
[2024-07-25 12:06:39.965121] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:32.801 [2024-07-25 12:06:39.967541] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.738 12:06:40 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:14:33.738 12:06:40 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:33.738 12:06:40 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:14:33.738 12:06:40 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:14:33.738 12:06:40 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:14:33.738 12:06:40 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:14:33.738 12:06:40 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:33.738 12:06:40 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.998 12:06:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:14:33.998 "name": "raid_bdev1", 00:14:33.998 "uuid": "bcd7ef1b-c9d5-43c0-9ef4-3e07fe79a67e", 00:14:33.998 "strip_size_kb": 0, 00:14:33.998 "state": "online", 00:14:33.998 "raid_level": "raid1", 00:14:33.998 "superblock": true, 00:14:33.998 "num_base_bdevs": 2, 00:14:33.998 "num_base_bdevs_discovered": 2, 00:14:33.998 "num_base_bdevs_operational": 2, 00:14:33.998 "base_bdevs_list": [ 00:14:33.998 { 00:14:33.998 "name": "spare", 00:14:33.998 "uuid": "05edbaae-5745-5fcc-90c7-686984f09b01", 00:14:33.998 "is_configured": true, 00:14:33.998 "data_offset": 2048, 00:14:33.998 "data_size": 63488 00:14:33.998 }, 00:14:33.998 { 00:14:33.998 "name": "BaseBdev2", 00:14:33.998 "uuid": "5db5cec3-9ef1-5465-946e-0450d6eb8288", 00:14:33.998 "is_configured": true, 00:14:33.998 "data_offset": 2048, 00:14:33.998 "data_size": 63488 00:14:33.998 } 00:14:33.998 ] 00:14:33.998 }' 00:14:33.998 12:06:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type 
// "none"' 00:14:33.998 12:06:41 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:33.998 12:06:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:14:33.998 12:06:41 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:14:33.998 12:06:41 -- bdev/bdev_raid.sh@660 -- # break 00:14:33.998 12:06:41 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:33.998 12:06:41 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:14:33.998 12:06:41 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:14:33.998 12:06:41 -- bdev/bdev_raid.sh@185 -- # local target=none 00:14:33.998 12:06:41 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:14:33.998 12:06:41 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:33.998 12:06:41 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.256 12:06:41 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:14:34.256 "name": "raid_bdev1", 00:14:34.256 "uuid": "bcd7ef1b-c9d5-43c0-9ef4-3e07fe79a67e", 00:14:34.256 "strip_size_kb": 0, 00:14:34.256 "state": "online", 00:14:34.256 "raid_level": "raid1", 00:14:34.256 "superblock": true, 00:14:34.256 "num_base_bdevs": 2, 00:14:34.256 "num_base_bdevs_discovered": 2, 00:14:34.256 "num_base_bdevs_operational": 2, 00:14:34.256 "base_bdevs_list": [ 00:14:34.256 { 00:14:34.256 "name": "spare", 00:14:34.256 "uuid": "05edbaae-5745-5fcc-90c7-686984f09b01", 00:14:34.256 "is_configured": true, 00:14:34.256 "data_offset": 2048, 00:14:34.256 "data_size": 63488 00:14:34.256 }, 00:14:34.256 { 00:14:34.256 "name": "BaseBdev2", 00:14:34.256 "uuid": "5db5cec3-9ef1-5465-946e-0450d6eb8288", 00:14:34.256 "is_configured": true, 00:14:34.257 "data_offset": 2048, 00:14:34.257 "data_size": 63488 00:14:34.257 } 00:14:34.257 ] 00:14:34.257 }' 00:14:34.257 12:06:41 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // 
"none"' 00:14:34.257 12:06:41 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:14:34.257 12:06:41 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:14:34.257 12:06:41 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:14:34.257 12:06:41 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:34.257 12:06:41 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:34.257 12:06:41 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:34.257 12:06:41 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:34.257 12:06:41 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:34.257 12:06:41 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:34.257 12:06:41 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:34.257 12:06:41 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:34.257 12:06:41 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:34.257 12:06:41 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:34.257 12:06:41 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:34.257 12:06:41 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.516 12:06:41 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:34.516 "name": "raid_bdev1", 00:14:34.516 "uuid": "bcd7ef1b-c9d5-43c0-9ef4-3e07fe79a67e", 00:14:34.516 "strip_size_kb": 0, 00:14:34.516 "state": "online", 00:14:34.516 "raid_level": "raid1", 00:14:34.516 "superblock": true, 00:14:34.516 "num_base_bdevs": 2, 00:14:34.516 "num_base_bdevs_discovered": 2, 00:14:34.516 "num_base_bdevs_operational": 2, 00:14:34.516 "base_bdevs_list": [ 00:14:34.516 { 00:14:34.516 "name": "spare", 00:14:34.516 "uuid": "05edbaae-5745-5fcc-90c7-686984f09b01", 00:14:34.516 "is_configured": true, 00:14:34.516 "data_offset": 2048, 00:14:34.516 "data_size": 63488 00:14:34.516 }, 00:14:34.516 { 
00:14:34.516 "name": "BaseBdev2", 00:14:34.516 "uuid": "5db5cec3-9ef1-5465-946e-0450d6eb8288", 00:14:34.516 "is_configured": true, 00:14:34.516 "data_offset": 2048, 00:14:34.516 "data_size": 63488 00:14:34.516 } 00:14:34.516 ] 00:14:34.516 }' 00:14:34.516 12:06:41 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:34.516 12:06:41 -- common/autotest_common.sh@10 -- # set +x 00:14:34.775 12:06:42 -- bdev/bdev_raid.sh@670 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:35.034 [2024-07-25 12:06:42.181801] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:35.034 [2024-07-25 12:06:42.181827] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:35.034 00:14:35.034 Latency(us) 00:14:35.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:35.034 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:35.034 raid_bdev1 : 9.89 120.49 361.46 0.00 0.00 11790.53 249.32 112152.04 00:14:35.034 =================================================================================================================== 00:14:35.034 Total : 120.49 361.46 0.00 0.00 11790.53 249.32 112152.04 00:14:35.034 [2024-07-25 12:06:42.220658] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:35.034 [2024-07-25 12:06:42.220677] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:35.034 [2024-07-25 12:06:42.220725] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:35.034 [2024-07-25 12:06:42.220734] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1bea5f0 name raid_bdev1, state offline 00:14:35.034 0 00:14:35.034 12:06:42 -- bdev/bdev_raid.sh@671 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:14:35.034 12:06:42 -- bdev/bdev_raid.sh@671 -- # jq length 00:14:35.293 12:06:42 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:14:35.293 12:06:42 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:14:35.293 12:06:42 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:14:35.293 12:06:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:35.293 12:06:42 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:35.293 12:06:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:35.293 12:06:42 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:35.293 12:06:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:35.293 12:06:42 -- bdev/nbd_common.sh@12 -- # local i 00:14:35.293 12:06:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:35.293 12:06:42 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:35.293 12:06:42 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:14:35.293 /dev/nbd0 00:14:35.293 12:06:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:35.293 12:06:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:35.293 12:06:42 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:14:35.293 12:06:42 -- common/autotest_common.sh@857 -- # local i 00:14:35.293 12:06:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:35.293 12:06:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:35.293 12:06:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:35.551 12:06:42 -- common/autotest_common.sh@861 -- # break 00:14:35.551 12:06:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:35.551 12:06:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:35.551 12:06:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:14:35.551 1+0 records in 00:14:35.551 1+0 records out 00:14:35.551 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277014 s, 14.8 MB/s 00:14:35.551 12:06:42 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:14:35.551 12:06:42 -- common/autotest_common.sh@874 -- # size=4096 00:14:35.551 12:06:42 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:14:35.551 12:06:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:35.551 12:06:42 -- common/autotest_common.sh@877 -- # return 0 00:14:35.551 12:06:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:35.551 12:06:42 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:35.551 12:06:42 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:14:35.551 12:06:42 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev2 ']' 00:14:35.551 12:06:42 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev2 /dev/nbd1 00:14:35.551 12:06:42 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:35.551 12:06:42 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:14:35.551 12:06:42 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:35.551 12:06:42 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:35.551 12:06:42 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:35.551 12:06:42 -- bdev/nbd_common.sh@12 -- # local i 00:14:35.551 12:06:42 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:35.551 12:06:42 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:35.551 12:06:42 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:14:35.551 /dev/nbd1 00:14:35.551 12:06:42 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:35.551 12:06:42 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:35.551 12:06:42 -- common/autotest_common.sh@856 -- # local 
nbd_name=nbd1 00:14:35.551 12:06:42 -- common/autotest_common.sh@857 -- # local i 00:14:35.551 12:06:42 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:35.551 12:06:42 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:35.551 12:06:42 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:14:35.551 12:06:42 -- common/autotest_common.sh@861 -- # break 00:14:35.551 12:06:42 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:35.551 12:06:42 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:35.551 12:06:42 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:35.551 1+0 records in 00:14:35.551 1+0 records out 00:14:35.551 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255267 s, 16.0 MB/s 00:14:35.551 12:06:42 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:14:35.551 12:06:42 -- common/autotest_common.sh@874 -- # size=4096 00:14:35.551 12:06:42 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:14:35.551 12:06:42 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:35.551 12:06:42 -- common/autotest_common.sh@877 -- # return 0 00:14:35.551 12:06:42 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:35.551 12:06:42 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:35.551 12:06:42 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:35.810 12:06:42 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:14:35.810 12:06:42 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:35.810 12:06:42 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:35.810 12:06:42 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:35.810 12:06:42 -- bdev/nbd_common.sh@51 -- # local i 00:14:35.810 12:06:42 -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:14:35.810 12:06:42 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:14:35.810 12:06:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:35.810 12:06:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:35.810 12:06:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:35.810 12:06:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:35.810 12:06:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:35.810 12:06:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:35.810 12:06:43 -- bdev/nbd_common.sh@41 -- # break 00:14:35.810 12:06:43 -- bdev/nbd_common.sh@45 -- # return 0 00:14:35.810 12:06:43 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:35.810 12:06:43 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:35.810 12:06:43 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:35.810 12:06:43 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:35.810 12:06:43 -- bdev/nbd_common.sh@51 -- # local i 00:14:35.810 12:06:43 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:35.810 12:06:43 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:36.068 12:06:43 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:36.068 12:06:43 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:36.068 12:06:43 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:36.068 12:06:43 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:36.068 12:06:43 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:36.068 12:06:43 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:36.068 12:06:43 -- bdev/nbd_common.sh@41 -- # break 00:14:36.068 12:06:43 -- bdev/nbd_common.sh@45 -- # return 0 00:14:36.068 12:06:43 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 
00:14:36.068 12:06:43 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:14:36.068 12:06:43 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:14:36.068 12:06:43 -- bdev/bdev_raid.sh@698 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:14:36.331 12:06:43 -- bdev/bdev_raid.sh@699 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:36.331 [2024-07-25 12:06:43.574157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:36.331 [2024-07-25 12:06:43.574194] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.331 [2024-07-25 12:06:43.574225] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1be7c30 00:14:36.331 [2024-07-25 12:06:43.574234] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.331 [2024-07-25 12:06:43.575423] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.331 [2024-07-25 12:06:43.575446] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:36.331 [2024-07-25 12:06:43.575498] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:36.331 [2024-07-25 12:06:43.575516] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:36.331 BaseBdev1 00:14:36.331 12:06:43 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:14:36.331 12:06:43 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev2 ']' 00:14:36.331 12:06:43 -- bdev/bdev_raid.sh@698 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev2 00:14:36.589 12:06:43 -- bdev/bdev_raid.sh@699 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:36.847 [2024-07-25 12:06:43.919070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:36.847 [2024-07-25 12:06:43.919102] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.847 [2024-07-25 12:06:43.919119] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1be2b10 00:14:36.847 [2024-07-25 12:06:43.919127] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.847 [2024-07-25 12:06:43.919383] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.847 [2024-07-25 12:06:43.919395] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:36.847 [2024-07-25 12:06:43.919440] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev2 00:14:36.847 [2024-07-25 12:06:43.919448] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev2 (3) greater than existing raid bdev raid_bdev1 (1) 00:14:36.847 [2024-07-25 12:06:43.919455] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:36.847 [2024-07-25 12:06:43.919465] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1be1e40 name raid_bdev1, state configuring 00:14:36.847 [2024-07-25 12:06:43.919486] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:36.847 BaseBdev2 00:14:36.847 12:06:43 -- bdev/bdev_raid.sh@701 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:14:36.847 12:06:44 -- bdev/bdev_raid.sh@702 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:14:37.105 [2024-07-25 12:06:44.251961] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on spare_delay 00:14:37.105 [2024-07-25 12:06:44.251997] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.105 [2024-07-25 12:06:44.252027] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1caac30 00:14:37.105 [2024-07-25 12:06:44.252036] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.105 [2024-07-25 12:06:44.252319] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.105 [2024-07-25 12:06:44.252336] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:37.105 [2024-07-25 12:06:44.252393] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:14:37.105 [2024-07-25 12:06:44.252406] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:37.105 spare 00:14:37.105 12:06:44 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:37.105 12:06:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:37.105 12:06:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:37.105 12:06:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:37.105 12:06:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:37.105 12:06:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=2 00:14:37.105 12:06:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:37.105 12:06:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:37.105 12:06:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:37.105 12:06:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:37.105 12:06:44 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.105 12:06:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.105 [2024-07-25 
12:06:44.352709] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x1bded70 00:14:37.105 [2024-07-25 12:06:44.352724] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:37.105 [2024-07-25 12:06:44.352867] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1ca9bd0 00:14:37.105 [2024-07-25 12:06:44.352973] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1bded70 00:14:37.105 [2024-07-25 12:06:44.352980] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1bded70 00:14:37.105 [2024-07-25 12:06:44.353060] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.363 12:06:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:37.363 "name": "raid_bdev1", 00:14:37.363 "uuid": "bcd7ef1b-c9d5-43c0-9ef4-3e07fe79a67e", 00:14:37.363 "strip_size_kb": 0, 00:14:37.363 "state": "online", 00:14:37.363 "raid_level": "raid1", 00:14:37.363 "superblock": true, 00:14:37.363 "num_base_bdevs": 2, 00:14:37.363 "num_base_bdevs_discovered": 2, 00:14:37.363 "num_base_bdevs_operational": 2, 00:14:37.363 "base_bdevs_list": [ 00:14:37.363 { 00:14:37.363 "name": "spare", 00:14:37.363 "uuid": "05edbaae-5745-5fcc-90c7-686984f09b01", 00:14:37.363 "is_configured": true, 00:14:37.363 "data_offset": 2048, 00:14:37.363 "data_size": 63488 00:14:37.363 }, 00:14:37.363 { 00:14:37.363 "name": "BaseBdev2", 00:14:37.363 "uuid": "5db5cec3-9ef1-5465-946e-0450d6eb8288", 00:14:37.363 "is_configured": true, 00:14:37.363 "data_offset": 2048, 00:14:37.363 "data_size": 63488 00:14:37.363 } 00:14:37.363 ] 00:14:37.363 }' 00:14:37.363 12:06:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:37.363 12:06:44 -- common/autotest_common.sh@10 -- # set +x 00:14:37.928 12:06:44 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:37.928 12:06:44 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:14:37.928 
12:06:44 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:14:37.928 12:06:44 -- bdev/bdev_raid.sh@185 -- # local target=none 00:14:37.928 12:06:44 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:14:37.928 12:06:44 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.928 12:06:44 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.928 12:06:45 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:14:37.928 "name": "raid_bdev1", 00:14:37.928 "uuid": "bcd7ef1b-c9d5-43c0-9ef4-3e07fe79a67e", 00:14:37.928 "strip_size_kb": 0, 00:14:37.928 "state": "online", 00:14:37.928 "raid_level": "raid1", 00:14:37.928 "superblock": true, 00:14:37.928 "num_base_bdevs": 2, 00:14:37.928 "num_base_bdevs_discovered": 2, 00:14:37.928 "num_base_bdevs_operational": 2, 00:14:37.928 "base_bdevs_list": [ 00:14:37.928 { 00:14:37.928 "name": "spare", 00:14:37.928 "uuid": "05edbaae-5745-5fcc-90c7-686984f09b01", 00:14:37.928 "is_configured": true, 00:14:37.928 "data_offset": 2048, 00:14:37.928 "data_size": 63488 00:14:37.928 }, 00:14:37.928 { 00:14:37.928 "name": "BaseBdev2", 00:14:37.928 "uuid": "5db5cec3-9ef1-5465-946e-0450d6eb8288", 00:14:37.928 "is_configured": true, 00:14:37.928 "data_offset": 2048, 00:14:37.928 "data_size": 63488 00:14:37.928 } 00:14:37.928 ] 00:14:37.928 }' 00:14:37.928 12:06:45 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:14:37.928 12:06:45 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:14:37.928 12:06:45 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:14:37.928 12:06:45 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:14:37.928 12:06:45 -- bdev/bdev_raid.sh@706 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.928 12:06:45 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:38.186 
12:06:45 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:14:38.186 12:06:45 -- bdev/bdev_raid.sh@709 -- # killprocess 1262669 00:14:38.186 12:06:45 -- common/autotest_common.sh@926 -- # '[' -z 1262669 ']' 00:14:38.186 12:06:45 -- common/autotest_common.sh@930 -- # kill -0 1262669 00:14:38.186 12:06:45 -- common/autotest_common.sh@931 -- # uname 00:14:38.186 12:06:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:38.186 12:06:45 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1262669 00:14:38.186 12:06:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:38.186 12:06:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:38.186 12:06:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1262669' 00:14:38.186 killing process with pid 1262669 00:14:38.186 12:06:45 -- common/autotest_common.sh@945 -- # kill 1262669 00:14:38.186 Received shutdown signal, test time was about 13.085653 seconds 00:14:38.186 00:14:38.186 Latency(us) 00:14:38.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:38.186 =================================================================================================================== 00:14:38.186 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:38.186 [2024-07-25 12:06:45.417943] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:38.186 [2024-07-25 12:06:45.417999] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:38.186 [2024-07-25 12:06:45.418041] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:38.186 [2024-07-25 12:06:45.418049] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1bded70 name raid_bdev1, state offline 00:14:38.186 12:06:45 -- common/autotest_common.sh@950 -- # wait 1262669 00:14:38.186 [2024-07-25 12:06:45.438834] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: 
raid_bdev_exit 00:14:38.444 12:06:45 -- bdev/bdev_raid.sh@711 -- # return 0 00:14:38.444 00:14:38.444 real 0m16.652s 00:14:38.444 user 0m25.186s 00:14:38.444 sys 0m2.709s 00:14:38.444 12:06:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:38.444 12:06:45 -- common/autotest_common.sh@10 -- # set +x 00:14:38.444 ************************************ 00:14:38.444 END TEST raid_rebuild_test_sb_io 00:14:38.444 ************************************ 00:14:38.445 12:06:45 -- bdev/bdev_raid.sh@734 -- # for n in 2 4 00:14:38.445 12:06:45 -- bdev/bdev_raid.sh@735 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false 00:14:38.445 12:06:45 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:14:38.445 12:06:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:38.445 12:06:45 -- common/autotest_common.sh@10 -- # set +x 00:14:38.445 ************************************ 00:14:38.445 START TEST raid_rebuild_test 00:14:38.445 ************************************ 00:14:38.445 12:06:45 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 false false 00:14:38.445 12:06:45 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:14:38.445 12:06:45 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:14:38.445 12:06:45 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:14:38.445 12:06:45 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:14:38.445 12:06:45 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:14:38.445 12:06:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:14:38.445 12:06:45 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:14:38.445 12:06:45 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:14:38.445 12:06:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:14:38.445 12:06:45 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:14:38.445 12:06:45 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:14:38.445 12:06:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:14:38.445 12:06:45 -- 
bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:14:38.445 12:06:45 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:14:38.445 12:06:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:14:38.445 12:06:45 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev4 00:14:38.445 12:06:45 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:14:38.445 12:06:45 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:14:38.445 12:06:45 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:38.445 12:06:45 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:14:38.445 12:06:45 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:14:38.445 12:06:45 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:14:38.445 12:06:45 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:14:38.445 12:06:45 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:14:38.445 12:06:45 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:14:38.445 12:06:45 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:14:38.445 12:06:45 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:14:38.445 12:06:45 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:14:38.445 12:06:45 -- bdev/bdev_raid.sh@544 -- # raid_pid=1265174 00:14:38.445 12:06:45 -- bdev/bdev_raid.sh@545 -- # waitforlisten 1265174 /var/tmp/spdk-raid.sock 00:14:38.445 12:06:45 -- bdev/bdev_raid.sh@543 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:38.445 12:06:45 -- common/autotest_common.sh@819 -- # '[' -z 1265174 ']' 00:14:38.445 12:06:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:38.445 12:06:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:38.445 12:06:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 
00:14:38.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:38.445 12:06:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:38.445 12:06:45 -- common/autotest_common.sh@10 -- # set +x 00:14:38.703 [2024-07-25 12:06:45.786067] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:38.704 [2024-07-25 12:06:45.786126] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1265174 ] 00:14:38.704 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:38.704 Zero copy mechanism will not be used. 00:14:38.704 [2024-07-25 12:06:45.878374] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:38.704 [2024-07-25 12:06:45.961109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:38.962 [2024-07-25 12:06:46.015890] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:38.962 [2024-07-25 12:06:46.015919] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:39.531 12:06:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:39.531 12:06:46 -- common/autotest_common.sh@852 -- # return 0 00:14:39.531 12:06:46 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:14:39.531 12:06:46 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:14:39.531 12:06:46 -- bdev/bdev_raid.sh@553 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:39.531 BaseBdev1 00:14:39.531 12:06:46 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:14:39.531 12:06:46 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:14:39.531 12:06:46 -- bdev/bdev_raid.sh@553 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:39.790 BaseBdev2 00:14:39.790 12:06:46 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:14:39.790 12:06:46 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:14:39.790 12:06:46 -- bdev/bdev_raid.sh@553 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:39.790 BaseBdev3 00:14:39.790 12:06:47 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:14:39.790 12:06:47 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:14:39.790 12:06:47 -- bdev/bdev_raid.sh@553 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:40.049 BaseBdev4 00:14:40.049 12:06:47 -- bdev/bdev_raid.sh@558 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:14:40.307 spare_malloc 00:14:40.307 12:06:47 -- bdev/bdev_raid.sh@559 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:40.307 spare_delay 00:14:40.307 12:06:47 -- bdev/bdev_raid.sh@560 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:14:40.566 [2024-07-25 12:06:47.746393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:40.566 [2024-07-25 12:06:47.746436] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.566 [2024-07-25 12:06:47.746450] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe3d0a0 00:14:40.566 [2024-07-25 12:06:47.746459] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.566 [2024-07-25 12:06:47.747441] vbdev_passthru.c: 
704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.566 [2024-07-25 12:06:47.747465] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:40.566 spare 00:14:40.566 12:06:47 -- bdev/bdev_raid.sh@563 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:14:40.824 [2024-07-25 12:06:47.906947] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:40.824 [2024-07-25 12:06:47.907756] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:40.824 [2024-07-25 12:06:47.907783] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:40.824 [2024-07-25 12:06:47.907804] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:40.824 [2024-07-25 12:06:47.907847] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0xf05940 00:14:40.824 [2024-07-25 12:06:47.907854] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:40.824 [2024-07-25 12:06:47.907992] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe35ff0 00:14:40.824 [2024-07-25 12:06:47.908077] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xf05940 00:14:40.824 [2024-07-25 12:06:47.908083] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xf05940 00:14:40.824 [2024-07-25 12:06:47.908152] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.824 12:06:47 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:40.824 12:06:47 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:40.824 12:06:47 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:40.824 12:06:47 -- bdev/bdev_raid.sh@119 -- 
# local raid_level=raid1 00:14:40.824 12:06:47 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:40.824 12:06:47 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:14:40.824 12:06:47 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:40.824 12:06:47 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:40.824 12:06:47 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:40.824 12:06:47 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:40.824 12:06:47 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.824 12:06:47 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:40.824 12:06:48 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:40.825 "name": "raid_bdev1", 00:14:40.825 "uuid": "3985987a-9f6a-44b4-bd97-87324b192414", 00:14:40.825 "strip_size_kb": 0, 00:14:40.825 "state": "online", 00:14:40.825 "raid_level": "raid1", 00:14:40.825 "superblock": false, 00:14:40.825 "num_base_bdevs": 4, 00:14:40.825 "num_base_bdevs_discovered": 4, 00:14:40.825 "num_base_bdevs_operational": 4, 00:14:40.825 "base_bdevs_list": [ 00:14:40.825 { 00:14:40.825 "name": "BaseBdev1", 00:14:40.825 "uuid": "5cc10c01-526a-4b8a-8c06-0098ebb49c35", 00:14:40.825 "is_configured": true, 00:14:40.825 "data_offset": 0, 00:14:40.825 "data_size": 65536 00:14:40.825 }, 00:14:40.825 { 00:14:40.825 "name": "BaseBdev2", 00:14:40.825 "uuid": "c15471f6-60b2-4144-987d-0196a7c8969a", 00:14:40.825 "is_configured": true, 00:14:40.825 "data_offset": 0, 00:14:40.825 "data_size": 65536 00:14:40.825 }, 00:14:40.825 { 00:14:40.825 "name": "BaseBdev3", 00:14:40.825 "uuid": "63f20984-737d-4d29-8a4b-a8aa90cdd3dd", 00:14:40.825 "is_configured": true, 00:14:40.825 "data_offset": 0, 00:14:40.825 "data_size": 65536 00:14:40.825 }, 00:14:40.825 { 00:14:40.825 "name": "BaseBdev4", 00:14:40.825 "uuid": "63aa864a-ce16-49d9-bd52-d0da787f3aad", 00:14:40.825 
"is_configured": true, 00:14:40.825 "data_offset": 0, 00:14:40.825 "data_size": 65536 00:14:40.825 } 00:14:40.825 ] 00:14:40.825 }' 00:14:40.825 12:06:48 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:40.825 12:06:48 -- common/autotest_common.sh@10 -- # set +x 00:14:41.392 12:06:48 -- bdev/bdev_raid.sh@567 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:14:41.392 12:06:48 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:14:41.651 [2024-07-25 12:06:48.721194] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:41.651 12:06:48 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:14:41.651 12:06:48 -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:41.651 12:06:48 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:41.651 12:06:48 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:14:41.651 12:06:48 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']' 00:14:41.651 12:06:48 -- bdev/bdev_raid.sh@576 -- # local write_unit_size 00:14:41.651 12:06:48 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0 00:14:41.651 12:06:48 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:41.651 12:06:48 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:41.651 12:06:48 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:41.651 12:06:48 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:41.651 12:06:48 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:41.651 12:06:48 -- bdev/nbd_common.sh@12 -- # local i 00:14:41.651 12:06:48 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:41.651 12:06:48 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:41.651 12:06:48 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:41.910 [2024-07-25 12:06:49.061979] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xefb9a0 00:14:41.910 /dev/nbd0 00:14:41.910 12:06:49 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:41.910 12:06:49 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:41.910 12:06:49 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:14:41.910 12:06:49 -- common/autotest_common.sh@857 -- # local i 00:14:41.910 12:06:49 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:41.910 12:06:49 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:41.910 12:06:49 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:41.910 12:06:49 -- common/autotest_common.sh@861 -- # break 00:14:41.910 12:06:49 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:41.910 12:06:49 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:41.910 12:06:49 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:41.910 1+0 records in 00:14:41.910 1+0 records out 00:14:41.910 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00022968 s, 17.8 MB/s 00:14:41.910 12:06:49 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:14:41.910 12:06:49 -- common/autotest_common.sh@874 -- # size=4096 00:14:41.910 12:06:49 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:14:41.910 12:06:49 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:41.910 12:06:49 -- common/autotest_common.sh@877 -- # return 0 00:14:41.910 12:06:49 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:41.910 12:06:49 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:41.910 12:06:49 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']' 00:14:41.910 12:06:49 -- bdev/bdev_raid.sh@584 -- # 
write_unit_size=1 00:14:41.910 12:06:49 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:46.093 65536+0 records in 00:14:46.093 65536+0 records out 00:14:46.093 33554432 bytes (34 MB, 32 MiB) copied, 4.27626 s, 7.8 MB/s 00:14:46.093 12:06:53 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:14:46.093 12:06:53 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:46.093 12:06:53 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:46.093 12:06:53 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:46.093 12:06:53 -- bdev/nbd_common.sh@51 -- # local i 00:14:46.093 12:06:53 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:46.352 12:06:53 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:46.352 12:06:53 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:46.352 12:06:53 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:46.352 12:06:53 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:46.352 12:06:53 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:46.352 12:06:53 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:46.352 12:06:53 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:46.352 [2024-07-25 12:06:53.581589] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:46.352 12:06:53 -- bdev/nbd_common.sh@41 -- # break 00:14:46.352 12:06:53 -- bdev/nbd_common.sh@45 -- # return 0 00:14:46.352 12:06:53 -- bdev/bdev_raid.sh@591 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:14:46.608 [2024-07-25 12:06:53.742006] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:46.608 12:06:53 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:46.608 12:06:53 -- 
bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:46.608 12:06:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:46.608 12:06:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:46.608 12:06:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:46.608 12:06:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:46.608 12:06:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:46.608 12:06:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:46.608 12:06:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:46.608 12:06:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:46.608 12:06:53 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.608 12:06:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.867 12:06:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:46.867 "name": "raid_bdev1", 00:14:46.867 "uuid": "3985987a-9f6a-44b4-bd97-87324b192414", 00:14:46.867 "strip_size_kb": 0, 00:14:46.867 "state": "online", 00:14:46.867 "raid_level": "raid1", 00:14:46.867 "superblock": false, 00:14:46.867 "num_base_bdevs": 4, 00:14:46.867 "num_base_bdevs_discovered": 3, 00:14:46.867 "num_base_bdevs_operational": 3, 00:14:46.867 "base_bdevs_list": [ 00:14:46.867 { 00:14:46.867 "name": null, 00:14:46.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.867 "is_configured": false, 00:14:46.867 "data_offset": 0, 00:14:46.867 "data_size": 65536 00:14:46.867 }, 00:14:46.867 { 00:14:46.867 "name": "BaseBdev2", 00:14:46.867 "uuid": "c15471f6-60b2-4144-987d-0196a7c8969a", 00:14:46.867 "is_configured": true, 00:14:46.867 "data_offset": 0, 00:14:46.867 "data_size": 65536 00:14:46.867 }, 00:14:46.867 { 00:14:46.867 "name": "BaseBdev3", 00:14:46.867 "uuid": "63f20984-737d-4d29-8a4b-a8aa90cdd3dd", 00:14:46.867 "is_configured": true, 00:14:46.867 
"data_offset": 0, 00:14:46.867 "data_size": 65536 00:14:46.867 }, 00:14:46.867 { 00:14:46.867 "name": "BaseBdev4", 00:14:46.867 "uuid": "63aa864a-ce16-49d9-bd52-d0da787f3aad", 00:14:46.867 "is_configured": true, 00:14:46.867 "data_offset": 0, 00:14:46.867 "data_size": 65536 00:14:46.867 } 00:14:46.867 ] 00:14:46.867 }' 00:14:46.867 12:06:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:46.867 12:06:53 -- common/autotest_common.sh@10 -- # set +x 00:14:47.170 12:06:54 -- bdev/bdev_raid.sh@597 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:14:47.433 [2024-07-25 12:06:54.584172] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:14:47.433 [2024-07-25 12:06:54.584203] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:47.433 [2024-07-25 12:06:54.587855] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe30670 00:14:47.433 [2024-07-25 12:06:54.589510] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:47.433 12:06:54 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:14:48.367 12:06:55 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:48.367 12:06:55 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:14:48.367 12:06:55 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:14:48.367 12:06:55 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:14:48.367 12:06:55 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:14:48.367 12:06:55 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.367 12:06:55 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.625 12:06:55 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:14:48.625 "name": "raid_bdev1", 00:14:48.625 "uuid": 
"3985987a-9f6a-44b4-bd97-87324b192414", 00:14:48.625 "strip_size_kb": 0, 00:14:48.625 "state": "online", 00:14:48.625 "raid_level": "raid1", 00:14:48.625 "superblock": false, 00:14:48.625 "num_base_bdevs": 4, 00:14:48.625 "num_base_bdevs_discovered": 4, 00:14:48.625 "num_base_bdevs_operational": 4, 00:14:48.625 "process": { 00:14:48.625 "type": "rebuild", 00:14:48.625 "target": "spare", 00:14:48.625 "progress": { 00:14:48.625 "blocks": 22528, 00:14:48.625 "percent": 34 00:14:48.625 } 00:14:48.625 }, 00:14:48.625 "base_bdevs_list": [ 00:14:48.625 { 00:14:48.625 "name": "spare", 00:14:48.625 "uuid": "4ae1a415-8110-5285-acf8-ee246419d301", 00:14:48.625 "is_configured": true, 00:14:48.625 "data_offset": 0, 00:14:48.625 "data_size": 65536 00:14:48.625 }, 00:14:48.625 { 00:14:48.625 "name": "BaseBdev2", 00:14:48.625 "uuid": "c15471f6-60b2-4144-987d-0196a7c8969a", 00:14:48.625 "is_configured": true, 00:14:48.625 "data_offset": 0, 00:14:48.625 "data_size": 65536 00:14:48.625 }, 00:14:48.625 { 00:14:48.625 "name": "BaseBdev3", 00:14:48.625 "uuid": "63f20984-737d-4d29-8a4b-a8aa90cdd3dd", 00:14:48.625 "is_configured": true, 00:14:48.625 "data_offset": 0, 00:14:48.625 "data_size": 65536 00:14:48.625 }, 00:14:48.625 { 00:14:48.625 "name": "BaseBdev4", 00:14:48.625 "uuid": "63aa864a-ce16-49d9-bd52-d0da787f3aad", 00:14:48.625 "is_configured": true, 00:14:48.625 "data_offset": 0, 00:14:48.625 "data_size": 65536 00:14:48.625 } 00:14:48.625 ] 00:14:48.625 }' 00:14:48.625 12:06:55 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:14:48.625 12:06:55 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:48.625 12:06:55 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:14:48.625 12:06:55 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:14:48.626 12:06:55 -- bdev/bdev_raid.sh@604 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:14:48.884 [2024-07-25 
12:06:55.997630] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:48.884 [2024-07-25 12:06:55.999680] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:48.884 [2024-07-25 12:06:55.999710] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:48.884 12:06:56 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:48.884 12:06:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:48.884 12:06:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:48.884 12:06:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:48.884 12:06:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:48.884 12:06:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:48.884 12:06:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:48.884 12:06:56 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:48.884 12:06:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:48.884 12:06:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:48.884 12:06:56 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.884 12:06:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.141 12:06:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:49.141 "name": "raid_bdev1", 00:14:49.141 "uuid": "3985987a-9f6a-44b4-bd97-87324b192414", 00:14:49.141 "strip_size_kb": 0, 00:14:49.141 "state": "online", 00:14:49.141 "raid_level": "raid1", 00:14:49.141 "superblock": false, 00:14:49.141 "num_base_bdevs": 4, 00:14:49.141 "num_base_bdevs_discovered": 3, 00:14:49.141 "num_base_bdevs_operational": 3, 00:14:49.141 "base_bdevs_list": [ 00:14:49.141 { 00:14:49.141 "name": null, 00:14:49.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.141 "is_configured": 
false, 00:14:49.141 "data_offset": 0, 00:14:49.141 "data_size": 65536 00:14:49.141 }, 00:14:49.141 { 00:14:49.141 "name": "BaseBdev2", 00:14:49.141 "uuid": "c15471f6-60b2-4144-987d-0196a7c8969a", 00:14:49.141 "is_configured": true, 00:14:49.141 "data_offset": 0, 00:14:49.141 "data_size": 65536 00:14:49.141 }, 00:14:49.141 { 00:14:49.141 "name": "BaseBdev3", 00:14:49.141 "uuid": "63f20984-737d-4d29-8a4b-a8aa90cdd3dd", 00:14:49.141 "is_configured": true, 00:14:49.141 "data_offset": 0, 00:14:49.141 "data_size": 65536 00:14:49.141 }, 00:14:49.141 { 00:14:49.141 "name": "BaseBdev4", 00:14:49.141 "uuid": "63aa864a-ce16-49d9-bd52-d0da787f3aad", 00:14:49.141 "is_configured": true, 00:14:49.141 "data_offset": 0, 00:14:49.141 "data_size": 65536 00:14:49.141 } 00:14:49.141 ] 00:14:49.141 }' 00:14:49.141 12:06:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:49.141 12:06:56 -- common/autotest_common.sh@10 -- # set +x 00:14:49.398 12:06:56 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:49.398 12:06:56 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:14:49.398 12:06:56 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:14:49.398 12:06:56 -- bdev/bdev_raid.sh@185 -- # local target=none 00:14:49.398 12:06:56 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:14:49.398 12:06:56 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.398 12:06:56 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.656 12:06:56 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:14:49.656 "name": "raid_bdev1", 00:14:49.656 "uuid": "3985987a-9f6a-44b4-bd97-87324b192414", 00:14:49.656 "strip_size_kb": 0, 00:14:49.656 "state": "online", 00:14:49.656 "raid_level": "raid1", 00:14:49.656 "superblock": false, 00:14:49.656 "num_base_bdevs": 4, 00:14:49.656 "num_base_bdevs_discovered": 3, 00:14:49.656 
"num_base_bdevs_operational": 3, 00:14:49.656 "base_bdevs_list": [ 00:14:49.656 { 00:14:49.656 "name": null, 00:14:49.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:49.656 "is_configured": false, 00:14:49.656 "data_offset": 0, 00:14:49.656 "data_size": 65536 00:14:49.656 }, 00:14:49.656 { 00:14:49.656 "name": "BaseBdev2", 00:14:49.656 "uuid": "c15471f6-60b2-4144-987d-0196a7c8969a", 00:14:49.656 "is_configured": true, 00:14:49.656 "data_offset": 0, 00:14:49.656 "data_size": 65536 00:14:49.656 }, 00:14:49.656 { 00:14:49.656 "name": "BaseBdev3", 00:14:49.656 "uuid": "63f20984-737d-4d29-8a4b-a8aa90cdd3dd", 00:14:49.656 "is_configured": true, 00:14:49.656 "data_offset": 0, 00:14:49.656 "data_size": 65536 00:14:49.656 }, 00:14:49.656 { 00:14:49.656 "name": "BaseBdev4", 00:14:49.656 "uuid": "63aa864a-ce16-49d9-bd52-d0da787f3aad", 00:14:49.656 "is_configured": true, 00:14:49.656 "data_offset": 0, 00:14:49.656 "data_size": 65536 00:14:49.656 } 00:14:49.656 ] 00:14:49.656 }' 00:14:49.656 12:06:56 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:14:49.656 12:06:56 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:14:49.656 12:06:56 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:14:49.656 12:06:56 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:14:49.656 12:06:56 -- bdev/bdev_raid.sh@613 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:14:49.913 [2024-07-25 12:06:57.077938] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:14:49.913 [2024-07-25 12:06:57.077967] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:49.913 [2024-07-25 12:06:57.081546] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf05c80 00:14:49.914 [2024-07-25 12:06:57.082616] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 
00:14:49.914 12:06:57 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:14:50.846 12:06:58 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:50.846 12:06:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:14:50.846 12:06:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:14:50.846 12:06:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:14:50.846 12:06:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:14:50.846 12:06:58 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.846 12:06:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.103 12:06:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:14:51.103 "name": "raid_bdev1", 00:14:51.103 "uuid": "3985987a-9f6a-44b4-bd97-87324b192414", 00:14:51.103 "strip_size_kb": 0, 00:14:51.103 "state": "online", 00:14:51.104 "raid_level": "raid1", 00:14:51.104 "superblock": false, 00:14:51.104 "num_base_bdevs": 4, 00:14:51.104 "num_base_bdevs_discovered": 4, 00:14:51.104 "num_base_bdevs_operational": 4, 00:14:51.104 "process": { 00:14:51.104 "type": "rebuild", 00:14:51.104 "target": "spare", 00:14:51.104 "progress": { 00:14:51.104 "blocks": 22528, 00:14:51.104 "percent": 34 00:14:51.104 } 00:14:51.104 }, 00:14:51.104 "base_bdevs_list": [ 00:14:51.104 { 00:14:51.104 "name": "spare", 00:14:51.104 "uuid": "4ae1a415-8110-5285-acf8-ee246419d301", 00:14:51.104 "is_configured": true, 00:14:51.104 "data_offset": 0, 00:14:51.104 "data_size": 65536 00:14:51.104 }, 00:14:51.104 { 00:14:51.104 "name": "BaseBdev2", 00:14:51.104 "uuid": "c15471f6-60b2-4144-987d-0196a7c8969a", 00:14:51.104 "is_configured": true, 00:14:51.104 "data_offset": 0, 00:14:51.104 "data_size": 65536 00:14:51.104 }, 00:14:51.104 { 00:14:51.104 "name": "BaseBdev3", 00:14:51.104 "uuid": "63f20984-737d-4d29-8a4b-a8aa90cdd3dd", 00:14:51.104 "is_configured": true, 
00:14:51.104 "data_offset": 0, 00:14:51.104 "data_size": 65536 00:14:51.104 }, 00:14:51.104 { 00:14:51.104 "name": "BaseBdev4", 00:14:51.104 "uuid": "63aa864a-ce16-49d9-bd52-d0da787f3aad", 00:14:51.104 "is_configured": true, 00:14:51.104 "data_offset": 0, 00:14:51.104 "data_size": 65536 00:14:51.104 } 00:14:51.104 ] 00:14:51.104 }' 00:14:51.104 12:06:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:14:51.104 12:06:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:51.104 12:06:58 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:14:51.104 12:06:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:14:51.104 12:06:58 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:14:51.104 12:06:58 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:14:51.104 12:06:58 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:14:51.104 12:06:58 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:14:51.104 12:06:58 -- bdev/bdev_raid.sh@646 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:51.362 [2024-07-25 12:06:58.506816] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:51.362 [2024-07-25 12:06:58.593715] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0xf05c80 00:14:51.362 12:06:58 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:14:51.362 12:06:58 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:14:51.362 12:06:58 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:51.362 12:06:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:14:51.362 12:06:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:14:51.362 12:06:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:14:51.362 12:06:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:14:51.362 12:06:58 -- bdev/bdev_raid.sh@188 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.362 12:06:58 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.619 12:06:58 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:14:51.619 "name": "raid_bdev1", 00:14:51.619 "uuid": "3985987a-9f6a-44b4-bd97-87324b192414", 00:14:51.619 "strip_size_kb": 0, 00:14:51.619 "state": "online", 00:14:51.619 "raid_level": "raid1", 00:14:51.619 "superblock": false, 00:14:51.619 "num_base_bdevs": 4, 00:14:51.619 "num_base_bdevs_discovered": 3, 00:14:51.619 "num_base_bdevs_operational": 3, 00:14:51.619 "process": { 00:14:51.619 "type": "rebuild", 00:14:51.619 "target": "spare", 00:14:51.619 "progress": { 00:14:51.619 "blocks": 32768, 00:14:51.619 "percent": 50 00:14:51.619 } 00:14:51.619 }, 00:14:51.619 "base_bdevs_list": [ 00:14:51.619 { 00:14:51.619 "name": "spare", 00:14:51.619 "uuid": "4ae1a415-8110-5285-acf8-ee246419d301", 00:14:51.619 "is_configured": true, 00:14:51.619 "data_offset": 0, 00:14:51.619 "data_size": 65536 00:14:51.619 }, 00:14:51.619 { 00:14:51.619 "name": null, 00:14:51.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.619 "is_configured": false, 00:14:51.619 "data_offset": 0, 00:14:51.619 "data_size": 65536 00:14:51.619 }, 00:14:51.619 { 00:14:51.619 "name": "BaseBdev3", 00:14:51.619 "uuid": "63f20984-737d-4d29-8a4b-a8aa90cdd3dd", 00:14:51.619 "is_configured": true, 00:14:51.619 "data_offset": 0, 00:14:51.619 "data_size": 65536 00:14:51.619 }, 00:14:51.619 { 00:14:51.619 "name": "BaseBdev4", 00:14:51.619 "uuid": "63aa864a-ce16-49d9-bd52-d0da787f3aad", 00:14:51.619 "is_configured": true, 00:14:51.619 "data_offset": 0, 00:14:51.619 "data_size": 65536 00:14:51.619 } 00:14:51.619 ] 00:14:51.619 }' 00:14:51.620 12:06:58 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:14:51.620 12:06:58 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:51.620 12:06:58 -- 
bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:14:51.620 12:06:58 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:14:51.620 12:06:58 -- bdev/bdev_raid.sh@657 -- # local timeout=357 00:14:51.620 12:06:58 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:14:51.620 12:06:58 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:51.620 12:06:58 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:14:51.620 12:06:58 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:14:51.620 12:06:58 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:14:51.620 12:06:58 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:14:51.620 12:06:58 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.620 12:06:58 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.877 12:06:59 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:14:51.877 "name": "raid_bdev1", 00:14:51.877 "uuid": "3985987a-9f6a-44b4-bd97-87324b192414", 00:14:51.877 "strip_size_kb": 0, 00:14:51.877 "state": "online", 00:14:51.877 "raid_level": "raid1", 00:14:51.877 "superblock": false, 00:14:51.877 "num_base_bdevs": 4, 00:14:51.877 "num_base_bdevs_discovered": 3, 00:14:51.877 "num_base_bdevs_operational": 3, 00:14:51.877 "process": { 00:14:51.877 "type": "rebuild", 00:14:51.877 "target": "spare", 00:14:51.877 "progress": { 00:14:51.877 "blocks": 38912, 00:14:51.877 "percent": 59 00:14:51.877 } 00:14:51.877 }, 00:14:51.877 "base_bdevs_list": [ 00:14:51.877 { 00:14:51.877 "name": "spare", 00:14:51.877 "uuid": "4ae1a415-8110-5285-acf8-ee246419d301", 00:14:51.877 "is_configured": true, 00:14:51.877 "data_offset": 0, 00:14:51.877 "data_size": 65536 00:14:51.877 }, 00:14:51.877 { 00:14:51.877 "name": null, 00:14:51.877 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:51.877 "is_configured": false, 00:14:51.877 
"data_offset": 0, 00:14:51.877 "data_size": 65536 00:14:51.877 }, 00:14:51.877 { 00:14:51.877 "name": "BaseBdev3", 00:14:51.877 "uuid": "63f20984-737d-4d29-8a4b-a8aa90cdd3dd", 00:14:51.877 "is_configured": true, 00:14:51.877 "data_offset": 0, 00:14:51.877 "data_size": 65536 00:14:51.877 }, 00:14:51.877 { 00:14:51.877 "name": "BaseBdev4", 00:14:51.877 "uuid": "63aa864a-ce16-49d9-bd52-d0da787f3aad", 00:14:51.877 "is_configured": true, 00:14:51.877 "data_offset": 0, 00:14:51.878 "data_size": 65536 00:14:51.878 } 00:14:51.878 ] 00:14:51.878 }' 00:14:51.878 12:06:59 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:14:51.878 12:06:59 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:51.878 12:06:59 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:14:51.878 12:06:59 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:14:51.878 12:06:59 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:14:52.811 12:07:00 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:14:52.811 12:07:00 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:52.811 12:07:00 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:14:52.811 12:07:00 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:14:52.811 12:07:00 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:14:52.811 12:07:00 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:14:53.068 12:07:00 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.068 12:07:00 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.068 12:07:00 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:14:53.068 "name": "raid_bdev1", 00:14:53.068 "uuid": "3985987a-9f6a-44b4-bd97-87324b192414", 00:14:53.068 "strip_size_kb": 0, 00:14:53.068 "state": "online", 00:14:53.068 "raid_level": "raid1", 00:14:53.068 "superblock": false, 00:14:53.068 
"num_base_bdevs": 4, 00:14:53.068 "num_base_bdevs_discovered": 3, 00:14:53.068 "num_base_bdevs_operational": 3, 00:14:53.068 "process": { 00:14:53.068 "type": "rebuild", 00:14:53.068 "target": "spare", 00:14:53.068 "progress": { 00:14:53.068 "blocks": 63488, 00:14:53.068 "percent": 96 00:14:53.068 } 00:14:53.068 }, 00:14:53.068 "base_bdevs_list": [ 00:14:53.068 { 00:14:53.068 "name": "spare", 00:14:53.068 "uuid": "4ae1a415-8110-5285-acf8-ee246419d301", 00:14:53.068 "is_configured": true, 00:14:53.068 "data_offset": 0, 00:14:53.068 "data_size": 65536 00:14:53.068 }, 00:14:53.068 { 00:14:53.068 "name": null, 00:14:53.068 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:53.068 "is_configured": false, 00:14:53.068 "data_offset": 0, 00:14:53.068 "data_size": 65536 00:14:53.068 }, 00:14:53.068 { 00:14:53.068 "name": "BaseBdev3", 00:14:53.068 "uuid": "63f20984-737d-4d29-8a4b-a8aa90cdd3dd", 00:14:53.068 "is_configured": true, 00:14:53.068 "data_offset": 0, 00:14:53.068 "data_size": 65536 00:14:53.068 }, 00:14:53.068 { 00:14:53.068 "name": "BaseBdev4", 00:14:53.068 "uuid": "63aa864a-ce16-49d9-bd52-d0da787f3aad", 00:14:53.068 "is_configured": true, 00:14:53.068 "data_offset": 0, 00:14:53.068 "data_size": 65536 00:14:53.068 } 00:14:53.068 ] 00:14:53.068 }' 00:14:53.069 12:07:00 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:14:53.069 [2024-07-25 12:07:00.306177] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:53.069 [2024-07-25 12:07:00.306231] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:53.069 [2024-07-25 12:07:00.306259] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.069 12:07:00 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:53.069 12:07:00 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:14:53.069 12:07:00 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:14:53.069 
12:07:00 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:14:54.439 12:07:01 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:14:54.439 12:07:01 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:54.439 12:07:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:14:54.439 12:07:01 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:14:54.439 12:07:01 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:14:54.439 12:07:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:14:54.439 12:07:01 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.439 12:07:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.439 12:07:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:14:54.439 "name": "raid_bdev1", 00:14:54.439 "uuid": "3985987a-9f6a-44b4-bd97-87324b192414", 00:14:54.439 "strip_size_kb": 0, 00:14:54.439 "state": "online", 00:14:54.439 "raid_level": "raid1", 00:14:54.439 "superblock": false, 00:14:54.439 "num_base_bdevs": 4, 00:14:54.439 "num_base_bdevs_discovered": 3, 00:14:54.439 "num_base_bdevs_operational": 3, 00:14:54.439 "base_bdevs_list": [ 00:14:54.439 { 00:14:54.439 "name": "spare", 00:14:54.439 "uuid": "4ae1a415-8110-5285-acf8-ee246419d301", 00:14:54.439 "is_configured": true, 00:14:54.439 "data_offset": 0, 00:14:54.439 "data_size": 65536 00:14:54.439 }, 00:14:54.439 { 00:14:54.439 "name": null, 00:14:54.439 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.439 "is_configured": false, 00:14:54.439 "data_offset": 0, 00:14:54.439 "data_size": 65536 00:14:54.439 }, 00:14:54.439 { 00:14:54.439 "name": "BaseBdev3", 00:14:54.439 "uuid": "63f20984-737d-4d29-8a4b-a8aa90cdd3dd", 00:14:54.439 "is_configured": true, 00:14:54.439 "data_offset": 0, 00:14:54.439 "data_size": 65536 00:14:54.439 }, 00:14:54.439 { 00:14:54.439 "name": "BaseBdev4", 00:14:54.439 "uuid": 
"63aa864a-ce16-49d9-bd52-d0da787f3aad", 00:14:54.439 "is_configured": true, 00:14:54.439 "data_offset": 0, 00:14:54.439 "data_size": 65536 00:14:54.439 } 00:14:54.439 ] 00:14:54.439 }' 00:14:54.439 12:07:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:14:54.439 12:07:01 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:54.439 12:07:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:14:54.439 12:07:01 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:14:54.439 12:07:01 -- bdev/bdev_raid.sh@660 -- # break 00:14:54.439 12:07:01 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:54.439 12:07:01 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:14:54.439 12:07:01 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:14:54.439 12:07:01 -- bdev/bdev_raid.sh@185 -- # local target=none 00:14:54.439 12:07:01 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:14:54.439 12:07:01 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.439 12:07:01 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.697 12:07:01 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:14:54.697 "name": "raid_bdev1", 00:14:54.697 "uuid": "3985987a-9f6a-44b4-bd97-87324b192414", 00:14:54.697 "strip_size_kb": 0, 00:14:54.697 "state": "online", 00:14:54.697 "raid_level": "raid1", 00:14:54.697 "superblock": false, 00:14:54.697 "num_base_bdevs": 4, 00:14:54.697 "num_base_bdevs_discovered": 3, 00:14:54.697 "num_base_bdevs_operational": 3, 00:14:54.697 "base_bdevs_list": [ 00:14:54.697 { 00:14:54.697 "name": "spare", 00:14:54.697 "uuid": "4ae1a415-8110-5285-acf8-ee246419d301", 00:14:54.697 "is_configured": true, 00:14:54.697 "data_offset": 0, 00:14:54.697 "data_size": 65536 00:14:54.697 }, 00:14:54.697 { 00:14:54.697 "name": null, 00:14:54.697 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:54.697 "is_configured": false, 00:14:54.697 "data_offset": 0, 00:14:54.697 "data_size": 65536 00:14:54.697 }, 00:14:54.697 { 00:14:54.697 "name": "BaseBdev3", 00:14:54.697 "uuid": "63f20984-737d-4d29-8a4b-a8aa90cdd3dd", 00:14:54.697 "is_configured": true, 00:14:54.697 "data_offset": 0, 00:14:54.697 "data_size": 65536 00:14:54.697 }, 00:14:54.697 { 00:14:54.697 "name": "BaseBdev4", 00:14:54.697 "uuid": "63aa864a-ce16-49d9-bd52-d0da787f3aad", 00:14:54.697 "is_configured": true, 00:14:54.697 "data_offset": 0, 00:14:54.697 "data_size": 65536 00:14:54.697 } 00:14:54.697 ] 00:14:54.697 }' 00:14:54.697 12:07:01 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:14:54.697 12:07:01 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:14:54.697 12:07:01 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:14:54.697 12:07:01 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:14:54.697 12:07:01 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:54.697 12:07:01 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:14:54.697 12:07:01 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:14:54.697 12:07:01 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:14:54.697 12:07:01 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:14:54.697 12:07:01 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:14:54.697 12:07:01 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:14:54.697 12:07:01 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:14:54.697 12:07:01 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:14:54.697 12:07:01 -- bdev/bdev_raid.sh@125 -- # local tmp 00:14:54.697 12:07:01 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.697 12:07:01 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:14:54.955 12:07:02 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:14:54.955 "name": "raid_bdev1", 00:14:54.955 "uuid": "3985987a-9f6a-44b4-bd97-87324b192414", 00:14:54.955 "strip_size_kb": 0, 00:14:54.955 "state": "online", 00:14:54.955 "raid_level": "raid1", 00:14:54.955 "superblock": false, 00:14:54.955 "num_base_bdevs": 4, 00:14:54.955 "num_base_bdevs_discovered": 3, 00:14:54.955 "num_base_bdevs_operational": 3, 00:14:54.955 "base_bdevs_list": [ 00:14:54.955 { 00:14:54.955 "name": "spare", 00:14:54.955 "uuid": "4ae1a415-8110-5285-acf8-ee246419d301", 00:14:54.955 "is_configured": true, 00:14:54.955 "data_offset": 0, 00:14:54.955 "data_size": 65536 00:14:54.955 }, 00:14:54.955 { 00:14:54.955 "name": null, 00:14:54.955 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.955 "is_configured": false, 00:14:54.955 "data_offset": 0, 00:14:54.955 "data_size": 65536 00:14:54.955 }, 00:14:54.955 { 00:14:54.955 "name": "BaseBdev3", 00:14:54.955 "uuid": "63f20984-737d-4d29-8a4b-a8aa90cdd3dd", 00:14:54.955 "is_configured": true, 00:14:54.955 "data_offset": 0, 00:14:54.955 "data_size": 65536 00:14:54.955 }, 00:14:54.955 { 00:14:54.955 "name": "BaseBdev4", 00:14:54.955 "uuid": "63aa864a-ce16-49d9-bd52-d0da787f3aad", 00:14:54.955 "is_configured": true, 00:14:54.955 "data_offset": 0, 00:14:54.955 "data_size": 65536 00:14:54.955 } 00:14:54.955 ] 00:14:54.955 }' 00:14:54.955 12:07:02 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:14:54.955 12:07:02 -- common/autotest_common.sh@10 -- # set +x 00:14:55.521 12:07:02 -- bdev/bdev_raid.sh@670 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:55.521 [2024-07-25 12:07:02.676896] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:55.521 [2024-07-25 12:07:02.676924] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:55.521 [2024-07-25 
12:07:02.676974] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:55.521 [2024-07-25 12:07:02.677019] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:55.521 [2024-07-25 12:07:02.677027] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf05940 name raid_bdev1, state offline 00:14:55.521 12:07:02 -- bdev/bdev_raid.sh@671 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:55.521 12:07:02 -- bdev/bdev_raid.sh@671 -- # jq length 00:14:55.779 12:07:02 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:14:55.779 12:07:02 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:14:55.779 12:07:02 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:55.779 12:07:02 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:55.779 12:07:02 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:55.779 12:07:02 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:55.779 12:07:02 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:55.779 12:07:02 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:55.779 12:07:02 -- bdev/nbd_common.sh@12 -- # local i 00:14:55.779 12:07:02 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:55.779 12:07:02 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:55.779 12:07:02 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:55.779 /dev/nbd0 00:14:55.779 12:07:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:55.779 12:07:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:55.779 12:07:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:14:55.779 12:07:03 -- common/autotest_common.sh@857 -- # local i 00:14:55.779 12:07:03 -- 
common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:55.779 12:07:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:55.779 12:07:03 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:14:55.779 12:07:03 -- common/autotest_common.sh@861 -- # break 00:14:55.779 12:07:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:55.779 12:07:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:55.779 12:07:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:55.779 1+0 records in 00:14:55.779 1+0 records out 00:14:55.779 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220085 s, 18.6 MB/s 00:14:55.779 12:07:03 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:14:55.779 12:07:03 -- common/autotest_common.sh@874 -- # size=4096 00:14:55.779 12:07:03 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:14:55.779 12:07:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:55.779 12:07:03 -- common/autotest_common.sh@877 -- # return 0 00:14:55.779 12:07:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:55.779 12:07:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:55.779 12:07:03 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:14:56.038 /dev/nbd1 00:14:56.038 12:07:03 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:56.038 12:07:03 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:56.038 12:07:03 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:14:56.038 12:07:03 -- common/autotest_common.sh@857 -- # local i 00:14:56.038 12:07:03 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:14:56.038 12:07:03 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:14:56.038 12:07:03 -- 
common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:14:56.038 12:07:03 -- common/autotest_common.sh@861 -- # break 00:14:56.038 12:07:03 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:14:56.038 12:07:03 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:14:56.038 12:07:03 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:56.038 1+0 records in 00:14:56.038 1+0 records out 00:14:56.038 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000309227 s, 13.2 MB/s 00:14:56.038 12:07:03 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:14:56.038 12:07:03 -- common/autotest_common.sh@874 -- # size=4096 00:14:56.038 12:07:03 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:14:56.038 12:07:03 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:14:56.038 12:07:03 -- common/autotest_common.sh@877 -- # return 0 00:14:56.038 12:07:03 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:56.038 12:07:03 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:56.038 12:07:03 -- bdev/bdev_raid.sh@688 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:56.294 12:07:03 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:14:56.294 12:07:03 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:14:56.295 12:07:03 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:56.295 12:07:03 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:56.295 12:07:03 -- bdev/nbd_common.sh@51 -- # local i 00:14:56.295 12:07:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:56.295 12:07:03 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:14:56.295 12:07:03 -- bdev/nbd_common.sh@55 -- # 
basename /dev/nbd0 00:14:56.295 12:07:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:56.295 12:07:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:56.295 12:07:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:56.295 12:07:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:56.295 12:07:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:56.295 12:07:03 -- bdev/nbd_common.sh@41 -- # break 00:14:56.295 12:07:03 -- bdev/nbd_common.sh@45 -- # return 0 00:14:56.295 12:07:03 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:56.295 12:07:03 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:14:56.552 12:07:03 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:56.552 12:07:03 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:56.552 12:07:03 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:56.552 12:07:03 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:56.552 12:07:03 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:56.552 12:07:03 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:56.552 12:07:03 -- bdev/nbd_common.sh@41 -- # break 00:14:56.552 12:07:03 -- bdev/nbd_common.sh@45 -- # return 0 00:14:56.552 12:07:03 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:14:56.552 12:07:03 -- bdev/bdev_raid.sh@709 -- # killprocess 1265174 00:14:56.552 12:07:03 -- common/autotest_common.sh@926 -- # '[' -z 1265174 ']' 00:14:56.552 12:07:03 -- common/autotest_common.sh@930 -- # kill -0 1265174 00:14:56.552 12:07:03 -- common/autotest_common.sh@931 -- # uname 00:14:56.552 12:07:03 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:56.552 12:07:03 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1265174 00:14:56.552 12:07:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:56.552 12:07:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo 
']' 00:14:56.552 12:07:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1265174' 00:14:56.552 killing process with pid 1265174 00:14:56.552 12:07:03 -- common/autotest_common.sh@945 -- # kill 1265174 00:14:56.552 Received shutdown signal, test time was about 60.000000 seconds 00:14:56.552 00:14:56.552 Latency(us) 00:14:56.552 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:56.552 =================================================================================================================== 00:14:56.552 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:56.552 [2024-07-25 12:07:03.766283] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:56.552 12:07:03 -- common/autotest_common.sh@950 -- # wait 1265174 00:14:56.552 [2024-07-25 12:07:03.816518] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:56.810 12:07:04 -- bdev/bdev_raid.sh@711 -- # return 0 00:14:56.810 00:14:56.810 real 0m18.318s 00:14:56.810 user 0m23.967s 00:14:56.810 sys 0m4.020s 00:14:56.810 12:07:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:56.810 12:07:04 -- common/autotest_common.sh@10 -- # set +x 00:14:56.810 ************************************ 00:14:56.810 END TEST raid_rebuild_test 00:14:56.810 ************************************ 00:14:56.810 12:07:04 -- bdev/bdev_raid.sh@736 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false 00:14:56.810 12:07:04 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:14:56.810 12:07:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:56.810 12:07:04 -- common/autotest_common.sh@10 -- # set +x 00:14:56.810 ************************************ 00:14:56.810 START TEST raid_rebuild_test_sb 00:14:56.810 ************************************ 00:14:56.810 12:07:04 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 true false 00:14:56.810 12:07:04 -- bdev/bdev_raid.sh@517 -- # local 
raid_level=raid1 00:14:56.810 12:07:04 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:14:56.810 12:07:04 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:14:56.810 12:07:04 -- bdev/bdev_raid.sh@520 -- # local background_io=false 00:14:56.810 12:07:04 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:14:56.810 12:07:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:14:56.810 12:07:04 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:14:56.810 12:07:04 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:14:56.810 12:07:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:14:56.810 12:07:04 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:14:56.810 12:07:04 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:14:56.810 12:07:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:14:56.810 12:07:04 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:14:56.810 12:07:04 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:14:56.810 12:07:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:14:56.810 12:07:04 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev4 00:14:56.810 12:07:04 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:14:56.810 12:07:04 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:14:56.810 12:07:04 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:56.810 12:07:04 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:14:56.810 12:07:04 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:14:56.810 12:07:04 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:14:56.810 12:07:04 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:14:56.810 12:07:04 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:14:56.810 12:07:04 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:14:56.810 12:07:04 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:14:56.810 12:07:04 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:14:56.810 12:07:04 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:14:56.810 12:07:04 -- 
bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:14:56.810 12:07:04 -- bdev/bdev_raid.sh@544 -- # raid_pid=1267883 00:14:56.810 12:07:04 -- bdev/bdev_raid.sh@545 -- # waitforlisten 1267883 /var/tmp/spdk-raid.sock 00:14:56.810 12:07:04 -- bdev/bdev_raid.sh@543 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:56.810 12:07:04 -- common/autotest_common.sh@819 -- # '[' -z 1267883 ']' 00:14:56.810 12:07:04 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:56.810 12:07:04 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:56.810 12:07:04 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:56.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:56.810 12:07:04 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:56.810 12:07:04 -- common/autotest_common.sh@10 -- # set +x 00:14:57.068 [2024-07-25 12:07:04.146727] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:14:57.069 [2024-07-25 12:07:04.146774] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1267883 ] 00:14:57.069 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:57.069 Zero copy mechanism will not be used. 
00:14:57.069 [2024-07-25 12:07:04.231084] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.069 [2024-07-25 12:07:04.314389] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.069 [2024-07-25 12:07:04.371429] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:57.069 [2024-07-25 12:07:04.371460] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:58.001 12:07:04 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:58.001 12:07:04 -- common/autotest_common.sh@852 -- # return 0 00:14:58.001 12:07:04 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:14:58.001 12:07:04 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:14:58.001 12:07:04 -- bdev/bdev_raid.sh@550 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:58.001 BaseBdev1_malloc 00:14:58.001 12:07:05 -- bdev/bdev_raid.sh@551 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:58.001 [2024-07-25 12:07:05.290969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:58.001 [2024-07-25 12:07:05.291008] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.001 [2024-07-25 12:07:05.291025] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2540a00 00:14:58.001 [2024-07-25 12:07:05.291034] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.001 [2024-07-25 12:07:05.292280] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.001 [2024-07-25 12:07:05.292301] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:58.001 BaseBdev1 00:14:58.001 12:07:05 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 
00:14:58.001 12:07:05 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:14:58.001 12:07:05 -- bdev/bdev_raid.sh@550 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:58.259 BaseBdev2_malloc 00:14:58.259 12:07:05 -- bdev/bdev_raid.sh@551 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:58.516 [2024-07-25 12:07:05.636897] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:58.516 [2024-07-25 12:07:05.636926] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.516 [2024-07-25 12:07:05.636941] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25415f0 00:14:58.516 [2024-07-25 12:07:05.636949] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.516 [2024-07-25 12:07:05.637852] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.516 [2024-07-25 12:07:05.637872] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:58.516 BaseBdev2 00:14:58.516 12:07:05 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:14:58.516 12:07:05 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:14:58.516 12:07:05 -- bdev/bdev_raid.sh@550 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:58.774 BaseBdev3_malloc 00:14:58.774 12:07:05 -- bdev/bdev_raid.sh@551 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:58.774 [2024-07-25 12:07:05.993652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:58.774 [2024-07-25 12:07:05.993688] vbdev_passthru.c: 
636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.774 [2024-07-25 12:07:05.993702] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x260cab0 00:14:58.774 [2024-07-25 12:07:05.993711] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.774 [2024-07-25 12:07:05.994830] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.774 [2024-07-25 12:07:05.994853] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:58.774 BaseBdev3 00:14:58.774 12:07:06 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:14:58.774 12:07:06 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:14:58.774 12:07:06 -- bdev/bdev_raid.sh@550 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:59.031 BaseBdev4_malloc 00:14:59.031 12:07:06 -- bdev/bdev_raid.sh@551 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:59.031 [2024-07-25 12:07:06.326229] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:59.031 [2024-07-25 12:07:06.326265] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.031 [2024-07-25 12:07:06.326283] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2541d80 00:14:59.031 [2024-07-25 12:07:06.326306] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.031 [2024-07-25 12:07:06.327482] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.031 [2024-07-25 12:07:06.327503] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:59.031 BaseBdev4 00:14:59.288 12:07:06 -- bdev/bdev_raid.sh@558 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:14:59.288 spare_malloc 00:14:59.288 12:07:06 -- bdev/bdev_raid.sh@559 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:59.546 spare_delay 00:14:59.546 12:07:06 -- bdev/bdev_raid.sh@560 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:14:59.546 [2024-07-25 12:07:06.828078] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:59.546 [2024-07-25 12:07:06.828115] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.546 [2024-07-25 12:07:06.828133] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2537630 00:14:59.546 [2024-07-25 12:07:06.828142] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.546 [2024-07-25 12:07:06.829341] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.546 [2024-07-25 12:07:06.829362] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:59.546 spare 00:14:59.546 12:07:06 -- bdev/bdev_raid.sh@563 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:14:59.803 [2024-07-25 12:07:06.984519] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:59.803 [2024-07-25 12:07:06.985456] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:59.803 [2024-07-25 12:07:06.985492] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:59.803 [2024-07-25 12:07:06.985520] 
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed
00:14:59.803 [2024-07-25 12:07:06.985654] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x253b3c0
00:14:59.803 [2024-07-25 12:07:06.985661] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:14:59.803 [2024-07-25 12:07:06.985800] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x253b2d0
00:14:59.803 [2024-07-25 12:07:06.985898] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x253b3c0
00:14:59.803 [2024-07-25 12:07:06.985905] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x253b3c0
00:14:59.803 [2024-07-25 12:07:06.985971] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:14:59.803 12:07:06 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:14:59.803 12:07:06 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:14:59.803 12:07:06 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:14:59.803 12:07:07 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:14:59.803 12:07:07 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:14:59.803 12:07:07 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4
00:14:59.803 12:07:07 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:14:59.803 12:07:07 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:14:59.803 12:07:07 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:14:59.803 12:07:07 -- bdev/bdev_raid.sh@125 -- # local tmp
00:14:59.803 12:07:07 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:14:59.803 12:07:07 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:00.060 12:07:07 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:00.060 "name": "raid_bdev1",
00:15:00.060 "uuid": "74fce9e8-cd08-4546-b509-907d66480fd8",
00:15:00.060 "strip_size_kb": 0,
00:15:00.061 "state": "online",
00:15:00.061 "raid_level": "raid1",
00:15:00.061 "superblock": true,
00:15:00.061 "num_base_bdevs": 4,
00:15:00.061 "num_base_bdevs_discovered": 4,
00:15:00.061 "num_base_bdevs_operational": 4,
00:15:00.061 "base_bdevs_list": [
00:15:00.061 {
00:15:00.061 "name": "BaseBdev1",
00:15:00.061 "uuid": "c6a5d834-ad14-5f67-a9bc-49c0f27bee1f",
00:15:00.061 "is_configured": true,
00:15:00.061 "data_offset": 2048,
00:15:00.061 "data_size": 63488
00:15:00.061 },
00:15:00.061 {
00:15:00.061 "name": "BaseBdev2",
00:15:00.061 "uuid": "d8df4c68-6b9f-596f-9be2-b8050d73bd01",
00:15:00.061 "is_configured": true,
00:15:00.061 "data_offset": 2048,
00:15:00.061 "data_size": 63488
00:15:00.061 },
00:15:00.061 {
00:15:00.061 "name": "BaseBdev3",
00:15:00.061 "uuid": "1d406a75-9286-58ee-bd57-5f3f8e0cafef",
00:15:00.061 "is_configured": true,
00:15:00.061 "data_offset": 2048,
00:15:00.061 "data_size": 63488
00:15:00.061 },
00:15:00.061 {
00:15:00.061 "name": "BaseBdev4",
00:15:00.061 "uuid": "e468cd93-cfe9-507d-9c2c-56c3e29a2ef1",
00:15:00.061 "is_configured": true,
00:15:00.061 "data_offset": 2048,
00:15:00.061 "data_size": 63488
00:15:00.061 }
00:15:00.061 ]
00:15:00.061 }'
00:15:00.061 12:07:07 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:00.061 12:07:07 -- common/autotest_common.sh@10 -- # set +x
00:15:00.627 12:07:07 -- bdev/bdev_raid.sh@567 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1
00:15:00.627 12:07:07 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks'
00:15:00.627 [2024-07-25 12:07:07.806749] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:15:00.627 12:07:07 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488
00:15:00.627 12:07:07 -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:00.627 12:07:07 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset'
00:15:00.884 12:07:08 -- bdev/bdev_raid.sh@570 -- # data_offset=2048
00:15:00.884 12:07:08 -- bdev/bdev_raid.sh@572 -- # '[' false = true ']'
00:15:00.884 12:07:08 -- bdev/bdev_raid.sh@576 -- # local write_unit_size
00:15:00.884 12:07:08 -- bdev/bdev_raid.sh@579 -- # nbd_start_disks /var/tmp/spdk-raid.sock raid_bdev1 /dev/nbd0
00:15:00.884 12:07:08 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:15:00.884 12:07:08 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1')
00:15:00.884 12:07:08 -- bdev/nbd_common.sh@10 -- # local bdev_list
00:15:00.884 12:07:08 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0')
00:15:00.884 12:07:08 -- bdev/nbd_common.sh@11 -- # local nbd_list
00:15:00.884 12:07:08 -- bdev/nbd_common.sh@12 -- # local i
00:15:00.884 12:07:08 -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:15:00.884 12:07:08 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:15:00.884 12:07:08 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk raid_bdev1 /dev/nbd0
00:15:00.885 [2024-07-25 12:07:08.155603] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x253c4f0
00:15:00.885 /dev/nbd0
00:15:00.885 12:07:08 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:15:00.885 12:07:08 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:15:00.885 12:07:08 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0
00:15:00.885 12:07:08 -- common/autotest_common.sh@857 -- # local i
00:15:00.885 12:07:08 -- common/autotest_common.sh@859 -- # (( i = 1 ))
00:15:00.885 12:07:08 -- common/autotest_common.sh@859 -- # (( i <= 20 ))
00:15:00.885 12:07:08 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions
00:15:00.885 12:07:08 -- common/autotest_common.sh@861 -- # break
00:15:00.885 12:07:08 --
common/autotest_common.sh@872 -- # (( i = 1 ))
00:15:00.885 12:07:08 -- common/autotest_common.sh@872 -- # (( i <= 20 ))
00:15:00.885 12:07:08 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:15:00.885 1+0 records in
00:15:00.885 1+0 records out
00:15:00.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260797 s, 15.7 MB/s
00:15:01.142 12:07:08 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:15:01.142 12:07:08 -- common/autotest_common.sh@874 -- # size=4096
00:15:01.142 12:07:08 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest
00:15:01.142 12:07:08 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']'
00:15:01.142 12:07:08 -- common/autotest_common.sh@877 -- # return 0
00:15:01.142 12:07:08 -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:15:01.142 12:07:08 -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:15:01.142 12:07:08 -- bdev/bdev_raid.sh@580 -- # '[' raid1 = raid5f ']'
00:15:01.142 12:07:08 -- bdev/bdev_raid.sh@584 -- # write_unit_size=1
00:15:01.142 12:07:08 -- bdev/bdev_raid.sh@586 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct
00:15:06.434 63488+0 records in
00:15:06.434 63488+0 records out
00:15:06.434 32505856 bytes (33 MB, 31 MiB) copied, 4.65543 s, 7.0 MB/s
00:15:06.434 12:07:12 -- bdev/bdev_raid.sh@587 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0
00:15:06.435 12:07:12 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock
00:15:06.435 12:07:12 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:15:06.435 12:07:12 -- bdev/nbd_common.sh@50 -- # local nbd_list
00:15:06.435 12:07:12 -- bdev/nbd_common.sh@51 -- # local i
00:15:06.435 12:07:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:15:06.435 12:07:12 -- bdev/nbd_common.sh@54 -- #
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0
00:15:06.435 12:07:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:15:06.435 [2024-07-25 12:07:13.041169] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:06.435 12:07:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:15:06.435 12:07:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:15:06.435 12:07:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:15:06.435 12:07:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:15:06.435 12:07:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:15:06.435 12:07:13 -- bdev/nbd_common.sh@41 -- # break
00:15:06.435 12:07:13 -- bdev/nbd_common.sh@45 -- # return 0
00:15:06.435 12:07:13 -- bdev/bdev_raid.sh@591 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1
00:15:06.435 [2024-07-25 12:07:13.198920] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:15:06.435 12:07:13 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:15:06.435 12:07:13 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:15:06.435 12:07:13 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:15:06.435 12:07:13 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:15:06.435 12:07:13 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:15:06.435 12:07:13 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:06.435 12:07:13 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:06.435 12:07:13 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:06.435 12:07:13 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:06.435 12:07:13 -- bdev/bdev_raid.sh@125 -- # local tmp
00:15:06.435 12:07:13 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock
bdev_raid_get_bdevs all
00:15:06.435 12:07:13 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:06.435 12:07:13 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:06.435 "name": "raid_bdev1",
00:15:06.435 "uuid": "74fce9e8-cd08-4546-b509-907d66480fd8",
00:15:06.435 "strip_size_kb": 0,
00:15:06.435 "state": "online",
00:15:06.435 "raid_level": "raid1",
00:15:06.435 "superblock": true,
00:15:06.435 "num_base_bdevs": 4,
00:15:06.435 "num_base_bdevs_discovered": 3,
00:15:06.435 "num_base_bdevs_operational": 3,
00:15:06.435 "base_bdevs_list": [
00:15:06.435 {
00:15:06.435 "name": null,
00:15:06.435 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:06.435 "is_configured": false,
00:15:06.435 "data_offset": 2048,
00:15:06.435 "data_size": 63488
00:15:06.435 },
00:15:06.435 {
00:15:06.435 "name": "BaseBdev2",
00:15:06.435 "uuid": "d8df4c68-6b9f-596f-9be2-b8050d73bd01",
00:15:06.435 "is_configured": true,
00:15:06.435 "data_offset": 2048,
00:15:06.435 "data_size": 63488
00:15:06.435 },
00:15:06.435 {
00:15:06.435 "name": "BaseBdev3",
00:15:06.435 "uuid": "1d406a75-9286-58ee-bd57-5f3f8e0cafef",
00:15:06.435 "is_configured": true,
00:15:06.435 "data_offset": 2048,
00:15:06.435 "data_size": 63488
00:15:06.435 },
00:15:06.435 {
00:15:06.435 "name": "BaseBdev4",
00:15:06.435 "uuid": "e468cd93-cfe9-507d-9c2c-56c3e29a2ef1",
00:15:06.435 "is_configured": true,
00:15:06.435 "data_offset": 2048,
00:15:06.435 "data_size": 63488
00:15:06.435 }
00:15:06.435 ]
00:15:06.435 }'
00:15:06.435 12:07:13 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:06.435 12:07:13 -- common/autotest_common.sh@10 -- # set +x
00:15:06.693 12:07:13 -- bdev/bdev_raid.sh@597 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:15:06.950 [2024-07-25 12:07:14.037070] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:15:06.950 [2024-07-25 12:07:14.037094]
bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:06.950 [2024-07-25 12:07:14.040718] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x253c4f0
00:15:06.950 [2024-07-25 12:07:14.042355] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:15:06.950 12:07:14 -- bdev/bdev_raid.sh@598 -- # sleep 1
00:15:07.882 12:07:15 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:07.883 12:07:15 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:15:07.883 12:07:15 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:15:07.883 12:07:15 -- bdev/bdev_raid.sh@185 -- # local target=spare
00:15:07.883 12:07:15 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:15:07.883 12:07:15 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:07.883 12:07:15 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:08.141 12:07:15 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:15:08.141 "name": "raid_bdev1",
00:15:08.141 "uuid": "74fce9e8-cd08-4546-b509-907d66480fd8",
00:15:08.141 "strip_size_kb": 0,
00:15:08.141 "state": "online",
00:15:08.141 "raid_level": "raid1",
00:15:08.141 "superblock": true,
00:15:08.141 "num_base_bdevs": 4,
00:15:08.141 "num_base_bdevs_discovered": 4,
00:15:08.141 "num_base_bdevs_operational": 4,
00:15:08.141 "process": {
00:15:08.141 "type": "rebuild",
00:15:08.141 "target": "spare",
00:15:08.141 "progress": {
00:15:08.141 "blocks": 22528,
00:15:08.141 "percent": 35
00:15:08.141 }
00:15:08.141 },
00:15:08.141 "base_bdevs_list": [
00:15:08.141 {
00:15:08.141 "name": "spare",
00:15:08.141 "uuid": "2eaac40f-efdb-51ab-b7ca-8247012f0f05",
00:15:08.141 "is_configured": true,
00:15:08.141 "data_offset": 2048,
00:15:08.141 "data_size": 63488
00:15:08.141 },
00:15:08.141 {
00:15:08.141 "name":
"BaseBdev2",
00:15:08.141 "uuid": "d8df4c68-6b9f-596f-9be2-b8050d73bd01",
00:15:08.141 "is_configured": true,
00:15:08.141 "data_offset": 2048,
00:15:08.141 "data_size": 63488
00:15:08.141 },
00:15:08.141 {
00:15:08.141 "name": "BaseBdev3",
00:15:08.141 "uuid": "1d406a75-9286-58ee-bd57-5f3f8e0cafef",
00:15:08.141 "is_configured": true,
00:15:08.141 "data_offset": 2048,
00:15:08.141 "data_size": 63488
00:15:08.141 },
00:15:08.141 {
00:15:08.141 "name": "BaseBdev4",
00:15:08.141 "uuid": "e468cd93-cfe9-507d-9c2c-56c3e29a2ef1",
00:15:08.141 "is_configured": true,
00:15:08.141 "data_offset": 2048,
00:15:08.141 "data_size": 63488
00:15:08.141 }
00:15:08.141 ]
00:15:08.141 }'
00:15:08.141 12:07:15 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:15:08.141 12:07:15 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:08.141 12:07:15 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:15:08.141 12:07:15 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:15:08.141 12:07:15 -- bdev/bdev_raid.sh@604 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare
00:15:08.399 [2024-07-25 12:07:15.450392] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:15:08.399 [2024-07-25 12:07:15.452516] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:15:08.399 [2024-07-25 12:07:15.452547] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:08.399 12:07:15 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:15:08.399 12:07:15 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1
00:15:08.399 12:07:15 -- bdev/bdev_raid.sh@118 -- # local expected_state=online
00:15:08.399 12:07:15 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1
00:15:08.399 12:07:15 -- bdev/bdev_raid.sh@120 -- # local strip_size=0
00:15:08.399 12:07:15 --
bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3
00:15:08.399 12:07:15 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info
00:15:08.399 12:07:15 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs
00:15:08.399 12:07:15 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered
00:15:08.399 12:07:15 -- bdev/bdev_raid.sh@125 -- # local tmp
00:15:08.399 12:07:15 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:08.399 12:07:15 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:08.399 12:07:15 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{
00:15:08.399 "name": "raid_bdev1",
00:15:08.399 "uuid": "74fce9e8-cd08-4546-b509-907d66480fd8",
00:15:08.399 "strip_size_kb": 0,
00:15:08.399 "state": "online",
00:15:08.399 "raid_level": "raid1",
00:15:08.399 "superblock": true,
00:15:08.399 "num_base_bdevs": 4,
00:15:08.399 "num_base_bdevs_discovered": 3,
00:15:08.399 "num_base_bdevs_operational": 3,
00:15:08.399 "base_bdevs_list": [
00:15:08.399 {
00:15:08.399 "name": null,
00:15:08.399 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:08.399 "is_configured": false,
00:15:08.399 "data_offset": 2048,
00:15:08.399 "data_size": 63488
00:15:08.399 },
00:15:08.399 {
00:15:08.399 "name": "BaseBdev2",
00:15:08.399 "uuid": "d8df4c68-6b9f-596f-9be2-b8050d73bd01",
00:15:08.399 "is_configured": true,
00:15:08.399 "data_offset": 2048,
00:15:08.399 "data_size": 63488
00:15:08.399 },
00:15:08.399 {
00:15:08.399 "name": "BaseBdev3",
00:15:08.399 "uuid": "1d406a75-9286-58ee-bd57-5f3f8e0cafef",
00:15:08.399 "is_configured": true,
00:15:08.399 "data_offset": 2048,
00:15:08.399 "data_size": 63488
00:15:08.399 },
00:15:08.399 {
00:15:08.399 "name": "BaseBdev4",
00:15:08.399 "uuid": "e468cd93-cfe9-507d-9c2c-56c3e29a2ef1",
00:15:08.399 "is_configured": true,
00:15:08.399 "data_offset": 2048,
00:15:08.399 "data_size": 63488
00:15:08.399 }
00:15:08.399 ]
00:15:08.399 }'
00:15:08.399 12:07:15 -- bdev/bdev_raid.sh@129 -- # xtrace_disable
00:15:08.399 12:07:15 -- common/autotest_common.sh@10 -- # set +x
00:15:08.965 12:07:16 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none
00:15:08.965 12:07:16 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:15:08.965 12:07:16 -- bdev/bdev_raid.sh@184 -- # local process_type=none
00:15:08.965 12:07:16 -- bdev/bdev_raid.sh@185 -- # local target=none
00:15:08.965 12:07:16 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:15:08.965 12:07:16 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:08.965 12:07:16 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:09.224 12:07:16 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:15:09.224 "name": "raid_bdev1",
00:15:09.224 "uuid": "74fce9e8-cd08-4546-b509-907d66480fd8",
00:15:09.224 "strip_size_kb": 0,
00:15:09.224 "state": "online",
00:15:09.224 "raid_level": "raid1",
00:15:09.224 "superblock": true,
00:15:09.224 "num_base_bdevs": 4,
00:15:09.224 "num_base_bdevs_discovered": 3,
00:15:09.224 "num_base_bdevs_operational": 3,
00:15:09.224 "base_bdevs_list": [
00:15:09.224 {
00:15:09.224 "name": null,
00:15:09.224 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:09.224 "is_configured": false,
00:15:09.224 "data_offset": 2048,
00:15:09.224 "data_size": 63488
00:15:09.224 },
00:15:09.224 {
00:15:09.224 "name": "BaseBdev2",
00:15:09.224 "uuid": "d8df4c68-6b9f-596f-9be2-b8050d73bd01",
00:15:09.224 "is_configured": true,
00:15:09.224 "data_offset": 2048,
00:15:09.224 "data_size": 63488
00:15:09.224 },
00:15:09.224 {
00:15:09.224 "name": "BaseBdev3",
00:15:09.224 "uuid": "1d406a75-9286-58ee-bd57-5f3f8e0cafef",
00:15:09.224 "is_configured": true,
00:15:09.224 "data_offset": 2048,
00:15:09.224 "data_size": 63488
00:15:09.224 },
00:15:09.224 {
00:15:09.224 "name": "BaseBdev4",
00:15:09.224 "uuid": "e468cd93-cfe9-507d-9c2c-56c3e29a2ef1",
00:15:09.224 "is_configured": true,
00:15:09.224 "data_offset": 2048,
00:15:09.224 "data_size": 63488
00:15:09.224 }
00:15:09.224 ]
00:15:09.224 }'
00:15:09.224 12:07:16 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:15:09.224 12:07:16 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]]
00:15:09.224 12:07:16 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:15:09.224 12:07:16 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]]
00:15:09.224 12:07:16 -- bdev/bdev_raid.sh@613 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare
00:15:09.225 [2024-07-25 12:07:16.507056] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare
00:15:09.225 [2024-07-25 12:07:16.507079] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:15:09.225 [2024-07-25 12:07:16.510636] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x253c4f0
00:15:09.225 [2024-07-25 12:07:16.511717] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:15:09.483 12:07:16 -- bdev/bdev_raid.sh@614 -- # sleep 1
00:15:10.417 12:07:17 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:10.417 12:07:17 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:15:10.417 12:07:17 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:15:10.417 12:07:17 -- bdev/bdev_raid.sh@185 -- # local target=spare
00:15:10.417 12:07:17 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:15:10.417 12:07:17 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:10.417 12:07:17 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:10.417 12:07:17 -- bdev/bdev_raid.sh@188 -- #
raid_bdev_info='{
00:15:10.417 "name": "raid_bdev1",
00:15:10.417 "uuid": "74fce9e8-cd08-4546-b509-907d66480fd8",
00:15:10.417 "strip_size_kb": 0,
00:15:10.417 "state": "online",
00:15:10.417 "raid_level": "raid1",
00:15:10.417 "superblock": true,
00:15:10.417 "num_base_bdevs": 4,
00:15:10.417 "num_base_bdevs_discovered": 4,
00:15:10.417 "num_base_bdevs_operational": 4,
00:15:10.417 "process": {
00:15:10.417 "type": "rebuild",
00:15:10.417 "target": "spare",
00:15:10.417 "progress": {
00:15:10.417 "blocks": 22528,
00:15:10.417 "percent": 35
00:15:10.417 }
00:15:10.417 },
00:15:10.417 "base_bdevs_list": [
00:15:10.417 {
00:15:10.417 "name": "spare",
00:15:10.417 "uuid": "2eaac40f-efdb-51ab-b7ca-8247012f0f05",
00:15:10.417 "is_configured": true,
00:15:10.417 "data_offset": 2048,
00:15:10.417 "data_size": 63488
00:15:10.417 },
00:15:10.417 {
00:15:10.417 "name": "BaseBdev2",
00:15:10.417 "uuid": "d8df4c68-6b9f-596f-9be2-b8050d73bd01",
00:15:10.417 "is_configured": true,
00:15:10.417 "data_offset": 2048,
00:15:10.417 "data_size": 63488
00:15:10.417 },
00:15:10.417 {
00:15:10.417 "name": "BaseBdev3",
00:15:10.417 "uuid": "1d406a75-9286-58ee-bd57-5f3f8e0cafef",
00:15:10.417 "is_configured": true,
00:15:10.417 "data_offset": 2048,
00:15:10.417 "data_size": 63488
00:15:10.417 },
00:15:10.417 {
00:15:10.417 "name": "BaseBdev4",
00:15:10.417 "uuid": "e468cd93-cfe9-507d-9c2c-56c3e29a2ef1",
00:15:10.417 "is_configured": true,
00:15:10.417 "data_offset": 2048,
00:15:10.417 "data_size": 63488
00:15:10.417 }
00:15:10.417 ]
00:15:10.417 }'
00:15:10.417 12:07:17 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:15:10.675 12:07:17 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:10.675 12:07:17 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:15:10.675 12:07:17 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:15:10.675 12:07:17 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']'
00:15:10.675 12:07:17 --
bdev/bdev_raid.sh@617 -- # '[' = false ']'
00:15:10.675 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected
00:15:10.675 12:07:17 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4
00:15:10.675 12:07:17 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']'
00:15:10.675 12:07:17 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']'
00:15:10.675 12:07:17 -- bdev/bdev_raid.sh@646 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2
00:15:10.675 [2024-07-25 12:07:17.948154] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:15:10.933 [2024-07-25 12:07:18.022620] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x253c4f0
00:15:10.933 12:07:18 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]=
00:15:10.933 12:07:18 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- ))
00:15:10.933 12:07:18 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:10.933 12:07:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:15:10.933 12:07:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:15:10.933 12:07:18 -- bdev/bdev_raid.sh@185 -- # local target=spare
00:15:10.933 12:07:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:15:10.933 12:07:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:10.933 12:07:18 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:11.191 12:07:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:15:11.191 "name": "raid_bdev1",
00:15:11.191 "uuid": "74fce9e8-cd08-4546-b509-907d66480fd8",
00:15:11.191 "strip_size_kb": 0,
00:15:11.191 "state": "online",
00:15:11.191 "raid_level": "raid1",
00:15:11.191 "superblock": true,
00:15:11.191 "num_base_bdevs": 4,
"num_base_bdevs_discovered": 3,
00:15:11.191 "num_base_bdevs_operational": 3,
00:15:11.191 "process": {
00:15:11.191 "type": "rebuild",
00:15:11.191 "target": "spare",
00:15:11.191 "progress": {
00:15:11.191 "blocks": 34816,
00:15:11.191 "percent": 54
00:15:11.191 }
00:15:11.191 },
00:15:11.191 "base_bdevs_list": [
00:15:11.191 {
00:15:11.191 "name": "spare",
00:15:11.191 "uuid": "2eaac40f-efdb-51ab-b7ca-8247012f0f05",
00:15:11.191 "is_configured": true,
00:15:11.191 "data_offset": 2048,
00:15:11.191 "data_size": 63488
00:15:11.191 },
00:15:11.191 {
00:15:11.191 "name": null,
00:15:11.191 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:11.191 "is_configured": false,
00:15:11.191 "data_offset": 2048,
00:15:11.191 "data_size": 63488
00:15:11.191 },
00:15:11.191 {
00:15:11.191 "name": "BaseBdev3",
00:15:11.191 "uuid": "1d406a75-9286-58ee-bd57-5f3f8e0cafef",
00:15:11.191 "is_configured": true,
00:15:11.191 "data_offset": 2048,
00:15:11.191 "data_size": 63488
00:15:11.191 },
00:15:11.191 {
00:15:11.191 "name": "BaseBdev4",
00:15:11.191 "uuid": "e468cd93-cfe9-507d-9c2c-56c3e29a2ef1",
00:15:11.191 "is_configured": true,
00:15:11.191 "data_offset": 2048,
00:15:11.191 "data_size": 63488
00:15:11.191 }
00:15:11.191 ]
00:15:11.191 }'
00:15:11.191 12:07:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:15:11.191 12:07:18 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:11.191 12:07:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:15:11.191 12:07:18 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:15:11.191 12:07:18 -- bdev/bdev_raid.sh@657 -- # local timeout=377
00:15:11.191 12:07:18 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:15:11.191 12:07:18 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:11.191 12:07:18 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:15:11.191 12:07:18 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:15:11.191 12:07:18 -- bdev/bdev_raid.sh@185 -- # local target=spare
00:15:11.191 12:07:18 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:15:11.191 12:07:18 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:11.191 12:07:18 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:11.449 12:07:18 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:15:11.449 "name": "raid_bdev1",
00:15:11.449 "uuid": "74fce9e8-cd08-4546-b509-907d66480fd8",
00:15:11.449 "strip_size_kb": 0,
00:15:11.449 "state": "online",
00:15:11.449 "raid_level": "raid1",
00:15:11.449 "superblock": true,
00:15:11.449 "num_base_bdevs": 4,
00:15:11.449 "num_base_bdevs_discovered": 3,
00:15:11.449 "num_base_bdevs_operational": 3,
00:15:11.449 "process": {
00:15:11.449 "type": "rebuild",
00:15:11.449 "target": "spare",
00:15:11.449 "progress": {
00:15:11.449 "blocks": 40960,
00:15:11.449 "percent": 64
00:15:11.449 }
00:15:11.449 },
00:15:11.449 "base_bdevs_list": [
00:15:11.449 {
00:15:11.449 "name": "spare",
00:15:11.449 "uuid": "2eaac40f-efdb-51ab-b7ca-8247012f0f05",
00:15:11.449 "is_configured": true,
00:15:11.449 "data_offset": 2048,
00:15:11.449 "data_size": 63488
00:15:11.449 },
00:15:11.449 {
00:15:11.449 "name": null,
00:15:11.449 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:11.449 "is_configured": false,
00:15:11.449 "data_offset": 2048,
00:15:11.449 "data_size": 63488
00:15:11.449 },
00:15:11.449 {
00:15:11.449 "name": "BaseBdev3",
00:15:11.449 "uuid": "1d406a75-9286-58ee-bd57-5f3f8e0cafef",
00:15:11.449 "is_configured": true,
00:15:11.449 "data_offset": 2048,
00:15:11.449 "data_size": 63488
00:15:11.449 },
00:15:11.449 {
00:15:11.449 "name": "BaseBdev4",
00:15:11.449 "uuid": "e468cd93-cfe9-507d-9c2c-56c3e29a2ef1",
00:15:11.449 "is_configured": true,
00:15:11.449 "data_offset": 2048,
00:15:11.450 "data_size": 63488
00:15:11.450 }
00:15:11.450 ]
00:15:11.450
}'
00:15:11.450 12:07:18 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:15:11.450 12:07:18 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:15:11.450 12:07:18 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:15:11.450 12:07:18 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]]
00:15:11.450 12:07:18 -- bdev/bdev_raid.sh@662 -- # sleep 1
00:15:12.384 12:07:19 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout ))
00:15:12.384 12:07:19 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:15:12.384 12:07:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:15:12.384 12:07:19 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild
00:15:12.384 12:07:19 -- bdev/bdev_raid.sh@185 -- # local target=spare
00:15:12.384 12:07:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:15:12.384 12:07:19 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:12.384 12:07:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:12.384 [2024-07-25 12:07:19.634400] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:15:12.384 [2024-07-25 12:07:19.634442] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:15:12.384 [2024-07-25 12:07:19.634512] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:15:12.642 12:07:19 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{
00:15:12.642 "name": "raid_bdev1",
00:15:12.642 "uuid": "74fce9e8-cd08-4546-b509-907d66480fd8",
00:15:12.642 "strip_size_kb": 0,
00:15:12.642 "state": "online",
00:15:12.642 "raid_level": "raid1",
00:15:12.642 "superblock": true,
00:15:12.642 "num_base_bdevs": 4,
00:15:12.642 "num_base_bdevs_discovered": 3,
00:15:12.642 "num_base_bdevs_operational": 3,
00:15:12.642 "base_bdevs_list": [
00:15:12.642 {
00:15:12.642 "name": "spare",
00:15:12.642 "uuid": "2eaac40f-efdb-51ab-b7ca-8247012f0f05",
00:15:12.642 "is_configured": true,
00:15:12.642 "data_offset": 2048,
00:15:12.642 "data_size": 63488
00:15:12.642 },
00:15:12.642 {
00:15:12.642 "name": null,
00:15:12.642 "uuid": "00000000-0000-0000-0000-000000000000",
00:15:12.642 "is_configured": false,
00:15:12.642 "data_offset": 2048,
00:15:12.642 "data_size": 63488
00:15:12.642 },
00:15:12.642 {
00:15:12.642 "name": "BaseBdev3",
00:15:12.642 "uuid": "1d406a75-9286-58ee-bd57-5f3f8e0cafef",
00:15:12.642 "is_configured": true,
00:15:12.642 "data_offset": 2048,
00:15:12.642 "data_size": 63488
00:15:12.642 },
00:15:12.642 {
00:15:12.642 "name": "BaseBdev4",
00:15:12.642 "uuid": "e468cd93-cfe9-507d-9c2c-56c3e29a2ef1",
00:15:12.642 "is_configured": true,
00:15:12.642 "data_offset": 2048,
00:15:12.642 "data_size": 63488
00:15:12.642 }
00:15:12.642 ]
00:15:12.642 }'
00:15:12.642 12:07:19 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"'
00:15:12.642 12:07:19 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]]
00:15:12.642 12:07:19 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"'
00:15:12.642 12:07:19 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]]
00:15:12.642 12:07:19 -- bdev/bdev_raid.sh@660 -- # break
00:15:12.642 12:07:19 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none
00:15:12.642 12:07:19 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1
00:15:12.642 12:07:19 -- bdev/bdev_raid.sh@184 -- # local process_type=none
00:15:12.642 12:07:19 -- bdev/bdev_raid.sh@185 -- # local target=none
00:15:12.642 12:07:19 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info
00:15:12.642 12:07:19 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:15:12.642 12:07:19 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
00:15:12.900 12:07:20 --
bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:15:12.900 "name": "raid_bdev1", 00:15:12.900 "uuid": "74fce9e8-cd08-4546-b509-907d66480fd8", 00:15:12.900 "strip_size_kb": 0, 00:15:12.900 "state": "online", 00:15:12.900 "raid_level": "raid1", 00:15:12.900 "superblock": true, 00:15:12.900 "num_base_bdevs": 4, 00:15:12.900 "num_base_bdevs_discovered": 3, 00:15:12.900 "num_base_bdevs_operational": 3, 00:15:12.900 "base_bdevs_list": [ 00:15:12.900 { 00:15:12.900 "name": "spare", 00:15:12.900 "uuid": "2eaac40f-efdb-51ab-b7ca-8247012f0f05", 00:15:12.900 "is_configured": true, 00:15:12.900 "data_offset": 2048, 00:15:12.900 "data_size": 63488 00:15:12.900 }, 00:15:12.900 { 00:15:12.900 "name": null, 00:15:12.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:12.900 "is_configured": false, 00:15:12.900 "data_offset": 2048, 00:15:12.900 "data_size": 63488 00:15:12.900 }, 00:15:12.900 { 00:15:12.900 "name": "BaseBdev3", 00:15:12.900 "uuid": "1d406a75-9286-58ee-bd57-5f3f8e0cafef", 00:15:12.900 "is_configured": true, 00:15:12.900 "data_offset": 2048, 00:15:12.900 "data_size": 63488 00:15:12.900 }, 00:15:12.900 { 00:15:12.900 "name": "BaseBdev4", 00:15:12.900 "uuid": "e468cd93-cfe9-507d-9c2c-56c3e29a2ef1", 00:15:12.900 "is_configured": true, 00:15:12.900 "data_offset": 2048, 00:15:12.900 "data_size": 63488 00:15:12.900 } 00:15:12.900 ] 00:15:12.900 }' 00:15:12.900 12:07:20 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:15:12.900 12:07:20 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:15:12.900 12:07:20 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:15:12.900 12:07:20 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:15:12.900 12:07:20 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:12.900 12:07:20 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:12.900 12:07:20 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:12.900 12:07:20 -- bdev/bdev_raid.sh@119 
-- # local raid_level=raid1 00:15:12.900 12:07:20 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:12.900 12:07:20 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:12.901 12:07:20 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:12.901 12:07:20 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:12.901 12:07:20 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:12.901 12:07:20 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:12.901 12:07:20 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:12.901 12:07:20 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.159 12:07:20 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:13.159 "name": "raid_bdev1", 00:15:13.159 "uuid": "74fce9e8-cd08-4546-b509-907d66480fd8", 00:15:13.159 "strip_size_kb": 0, 00:15:13.159 "state": "online", 00:15:13.159 "raid_level": "raid1", 00:15:13.159 "superblock": true, 00:15:13.159 "num_base_bdevs": 4, 00:15:13.159 "num_base_bdevs_discovered": 3, 00:15:13.159 "num_base_bdevs_operational": 3, 00:15:13.159 "base_bdevs_list": [ 00:15:13.159 { 00:15:13.159 "name": "spare", 00:15:13.159 "uuid": "2eaac40f-efdb-51ab-b7ca-8247012f0f05", 00:15:13.159 "is_configured": true, 00:15:13.159 "data_offset": 2048, 00:15:13.159 "data_size": 63488 00:15:13.159 }, 00:15:13.159 { 00:15:13.160 "name": null, 00:15:13.160 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:13.160 "is_configured": false, 00:15:13.160 "data_offset": 2048, 00:15:13.160 "data_size": 63488 00:15:13.160 }, 00:15:13.160 { 00:15:13.160 "name": "BaseBdev3", 00:15:13.160 "uuid": "1d406a75-9286-58ee-bd57-5f3f8e0cafef", 00:15:13.160 "is_configured": true, 00:15:13.160 "data_offset": 2048, 00:15:13.160 "data_size": 63488 00:15:13.160 }, 00:15:13.160 { 00:15:13.160 "name": "BaseBdev4", 00:15:13.160 "uuid": "e468cd93-cfe9-507d-9c2c-56c3e29a2ef1", 00:15:13.160 
"is_configured": true, 00:15:13.160 "data_offset": 2048, 00:15:13.160 "data_size": 63488 00:15:13.160 } 00:15:13.160 ] 00:15:13.160 }' 00:15:13.160 12:07:20 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:13.160 12:07:20 -- common/autotest_common.sh@10 -- # set +x 00:15:13.418 12:07:20 -- bdev/bdev_raid.sh@670 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:13.683 [2024-07-25 12:07:20.885432] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:13.683 [2024-07-25 12:07:20.885452] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:13.683 [2024-07-25 12:07:20.885491] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:13.683 [2024-07-25 12:07:20.885537] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:13.683 [2024-07-25 12:07:20.885545] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x253b3c0 name raid_bdev1, state offline 00:15:13.683 12:07:20 -- bdev/bdev_raid.sh@671 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.683 12:07:20 -- bdev/bdev_raid.sh@671 -- # jq length 00:15:13.945 12:07:21 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:15:13.945 12:07:21 -- bdev/bdev_raid.sh@673 -- # '[' false = true ']' 00:15:13.945 12:07:21 -- bdev/bdev_raid.sh@687 -- # nbd_start_disks /var/tmp/spdk-raid.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:13.945 12:07:21 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:13.945 12:07:21 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:13.945 12:07:21 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:13.945 12:07:21 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:13.945 12:07:21 -- bdev/nbd_common.sh@11 -- # local 
nbd_list 00:15:13.945 12:07:21 -- bdev/nbd_common.sh@12 -- # local i 00:15:13.945 12:07:21 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:13.945 12:07:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:13.945 12:07:21 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:13.945 /dev/nbd0 00:15:14.203 12:07:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:14.203 12:07:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:14.203 12:07:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:15:14.203 12:07:21 -- common/autotest_common.sh@857 -- # local i 00:15:14.203 12:07:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:14.203 12:07:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:14.203 12:07:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:15:14.203 12:07:21 -- common/autotest_common.sh@861 -- # break 00:15:14.203 12:07:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:14.203 12:07:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:14.203 12:07:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:14.203 1+0 records in 00:15:14.203 1+0 records out 00:15:14.203 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226561 s, 18.1 MB/s 00:15:14.203 12:07:21 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:15:14.203 12:07:21 -- common/autotest_common.sh@874 -- # size=4096 00:15:14.203 12:07:21 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:15:14.203 12:07:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:14.203 12:07:21 -- common/autotest_common.sh@877 -- # return 0 00:15:14.203 12:07:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:15:14.203 12:07:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:14.203 12:07:21 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd1 00:15:14.203 /dev/nbd1 00:15:14.203 12:07:21 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:14.203 12:07:21 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:14.203 12:07:21 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:15:14.203 12:07:21 -- common/autotest_common.sh@857 -- # local i 00:15:14.203 12:07:21 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:14.203 12:07:21 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:14.203 12:07:21 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:15:14.203 12:07:21 -- common/autotest_common.sh@861 -- # break 00:15:14.203 12:07:21 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:14.203 12:07:21 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:14.203 12:07:21 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:14.203 1+0 records in 00:15:14.203 1+0 records out 00:15:14.203 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000267244 s, 15.3 MB/s 00:15:14.203 12:07:21 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:15:14.203 12:07:21 -- common/autotest_common.sh@874 -- # size=4096 00:15:14.203 12:07:21 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:15:14.461 12:07:21 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:14.461 12:07:21 -- common/autotest_common.sh@877 -- # return 0 00:15:14.461 12:07:21 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:14.461 12:07:21 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:14.461 12:07:21 -- bdev/bdev_raid.sh@688 -- # cmp -i 1048576 /dev/nbd0 
/dev/nbd1 00:15:14.461 12:07:21 -- bdev/bdev_raid.sh@689 -- # nbd_stop_disks /var/tmp/spdk-raid.sock '/dev/nbd0 /dev/nbd1' 00:15:14.461 12:07:21 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:14.461 12:07:21 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:14.461 12:07:21 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:14.461 12:07:21 -- bdev/nbd_common.sh@51 -- # local i 00:15:14.461 12:07:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:14.461 12:07:21 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:15:14.461 12:07:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:14.461 12:07:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:14.461 12:07:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:14.461 12:07:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:14.461 12:07:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:14.461 12:07:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:14.461 12:07:21 -- bdev/nbd_common.sh@41 -- # break 00:15:14.461 12:07:21 -- bdev/nbd_common.sh@45 -- # return 0 00:15:14.461 12:07:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:14.461 12:07:21 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:15:14.719 12:07:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:14.719 12:07:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:14.719 12:07:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:14.719 12:07:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:14.719 12:07:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:14.719 12:07:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:14.719 12:07:21 -- bdev/nbd_common.sh@41 -- # break 00:15:14.719 12:07:21 -- 
bdev/nbd_common.sh@45 -- # return 0 00:15:14.719 12:07:21 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:15:14.719 12:07:21 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:15:14.720 12:07:21 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:15:14.720 12:07:21 -- bdev/bdev_raid.sh@698 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:15:14.978 12:07:22 -- bdev/bdev_raid.sh@699 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:14.978 [2024-07-25 12:07:22.270177] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:14.978 [2024-07-25 12:07:22.270214] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.978 [2024-07-25 12:07:22.270244] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2540c30 00:15:14.978 [2024-07-25 12:07:22.270253] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.978 [2024-07-25 12:07:22.271431] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.978 [2024-07-25 12:07:22.271455] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:14.978 [2024-07-25 12:07:22.271504] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:14.978 [2024-07-25 12:07:22.271523] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:14.978 BaseBdev1 00:15:15.237 12:07:22 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:15:15.237 12:07:22 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:15:15.237 12:07:22 -- bdev/bdev_raid.sh@696 -- # continue 00:15:15.237 12:07:22 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:15:15.237 12:07:22 -- bdev/bdev_raid.sh@695 -- # 
'[' -z BaseBdev3 ']' 00:15:15.237 12:07:22 -- bdev/bdev_raid.sh@698 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:15:15.237 12:07:22 -- bdev/bdev_raid.sh@699 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:15.496 [2024-07-25 12:07:22.607033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:15.496 [2024-07-25 12:07:22.607062] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.496 [2024-07-25 12:07:22.607079] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2540040 00:15:15.496 [2024-07-25 12:07:22.607087] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.496 [2024-07-25 12:07:22.607324] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.496 [2024-07-25 12:07:22.607336] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:15.496 [2024-07-25 12:07:22.607375] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:15:15.496 [2024-07-25 12:07:22.607383] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater than existing raid bdev raid_bdev1 (1) 00:15:15.496 [2024-07-25 12:07:22.607390] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:15.496 [2024-07-25 12:07:22.607401] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x26e1360 name raid_bdev1, state configuring 00:15:15.496 [2024-07-25 12:07:22.607422] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:15.496 BaseBdev3 00:15:15.496 12:07:22 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:15:15.496 12:07:22 -- bdev/bdev_raid.sh@695 -- # 
'[' -z BaseBdev4 ']' 00:15:15.496 12:07:22 -- bdev/bdev_raid.sh@698 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:15:15.496 12:07:22 -- bdev/bdev_raid.sh@699 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:15.755 [2024-07-25 12:07:22.935991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:15.756 [2024-07-25 12:07:22.936022] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:15.756 [2024-07-25 12:07:22.936034] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x253c600 00:15:15.756 [2024-07-25 12:07:22.936042] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:15.756 [2024-07-25 12:07:22.936297] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:15.756 [2024-07-25 12:07:22.936310] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:15.756 [2024-07-25 12:07:22.936352] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:15:15.756 [2024-07-25 12:07:22.936366] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:15.756 BaseBdev4 00:15:15.756 12:07:22 -- bdev/bdev_raid.sh@701 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:15:16.015 12:07:23 -- bdev/bdev_raid.sh@702 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:15:16.015 [2024-07-25 12:07:23.264836] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:16.015 [2024-07-25 12:07:23.264865] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:15:16.015 [2024-07-25 12:07:23.264880] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x253d000 00:15:16.015 [2024-07-25 12:07:23.264888] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.015 [2024-07-25 12:07:23.265164] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.015 [2024-07-25 12:07:23.265176] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:16.015 [2024-07-25 12:07:23.265231] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:15:16.015 [2024-07-25 12:07:23.265244] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:16.015 spare 00:15:16.015 12:07:23 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:16.015 12:07:23 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:16.015 12:07:23 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:16.015 12:07:23 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:16.015 12:07:23 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:16.015 12:07:23 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:16.015 12:07:23 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:16.015 12:07:23 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:16.015 12:07:23 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:16.015 12:07:23 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:16.015 12:07:23 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.015 12:07:23 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.283 [2024-07-25 12:07:23.365564] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x26e15e0 00:15:16.283 [2024-07-25 12:07:23.365582] 
bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:16.283 [2024-07-25 12:07:23.365736] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2537da0 00:15:16.283 [2024-07-25 12:07:23.365861] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x26e15e0 00:15:16.283 [2024-07-25 12:07:23.365868] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x26e15e0 00:15:16.283 [2024-07-25 12:07:23.365954] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.283 12:07:23 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:16.283 "name": "raid_bdev1", 00:15:16.283 "uuid": "74fce9e8-cd08-4546-b509-907d66480fd8", 00:15:16.283 "strip_size_kb": 0, 00:15:16.283 "state": "online", 00:15:16.283 "raid_level": "raid1", 00:15:16.283 "superblock": true, 00:15:16.283 "num_base_bdevs": 4, 00:15:16.283 "num_base_bdevs_discovered": 3, 00:15:16.283 "num_base_bdevs_operational": 3, 00:15:16.283 "base_bdevs_list": [ 00:15:16.283 { 00:15:16.283 "name": "spare", 00:15:16.283 "uuid": "2eaac40f-efdb-51ab-b7ca-8247012f0f05", 00:15:16.283 "is_configured": true, 00:15:16.283 "data_offset": 2048, 00:15:16.283 "data_size": 63488 00:15:16.283 }, 00:15:16.283 { 00:15:16.283 "name": null, 00:15:16.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.283 "is_configured": false, 00:15:16.283 "data_offset": 2048, 00:15:16.283 "data_size": 63488 00:15:16.283 }, 00:15:16.283 { 00:15:16.283 "name": "BaseBdev3", 00:15:16.284 "uuid": "1d406a75-9286-58ee-bd57-5f3f8e0cafef", 00:15:16.284 "is_configured": true, 00:15:16.284 "data_offset": 2048, 00:15:16.284 "data_size": 63488 00:15:16.284 }, 00:15:16.284 { 00:15:16.284 "name": "BaseBdev4", 00:15:16.284 "uuid": "e468cd93-cfe9-507d-9c2c-56c3e29a2ef1", 00:15:16.284 "is_configured": true, 00:15:16.284 "data_offset": 2048, 00:15:16.284 "data_size": 63488 00:15:16.284 } 00:15:16.284 ] 00:15:16.284 }' 00:15:16.284 
12:07:23 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:16.284 12:07:23 -- common/autotest_common.sh@10 -- # set +x 00:15:16.851 12:07:23 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:16.851 12:07:23 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:15:16.851 12:07:23 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:15:16.851 12:07:23 -- bdev/bdev_raid.sh@185 -- # local target=none 00:15:16.851 12:07:23 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:15:16.851 12:07:23 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:16.851 12:07:23 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.851 12:07:24 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:15:16.851 "name": "raid_bdev1", 00:15:16.851 "uuid": "74fce9e8-cd08-4546-b509-907d66480fd8", 00:15:16.851 "strip_size_kb": 0, 00:15:16.851 "state": "online", 00:15:16.851 "raid_level": "raid1", 00:15:16.851 "superblock": true, 00:15:16.851 "num_base_bdevs": 4, 00:15:16.851 "num_base_bdevs_discovered": 3, 00:15:16.851 "num_base_bdevs_operational": 3, 00:15:16.851 "base_bdevs_list": [ 00:15:16.851 { 00:15:16.851 "name": "spare", 00:15:16.851 "uuid": "2eaac40f-efdb-51ab-b7ca-8247012f0f05", 00:15:16.851 "is_configured": true, 00:15:16.851 "data_offset": 2048, 00:15:16.851 "data_size": 63488 00:15:16.851 }, 00:15:16.851 { 00:15:16.851 "name": null, 00:15:16.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.851 "is_configured": false, 00:15:16.851 "data_offset": 2048, 00:15:16.851 "data_size": 63488 00:15:16.851 }, 00:15:16.851 { 00:15:16.851 "name": "BaseBdev3", 00:15:16.851 "uuid": "1d406a75-9286-58ee-bd57-5f3f8e0cafef", 00:15:16.851 "is_configured": true, 00:15:16.851 "data_offset": 2048, 00:15:16.851 "data_size": 63488 00:15:16.851 }, 00:15:16.851 { 00:15:16.851 "name": "BaseBdev4", 00:15:16.851 "uuid": 
"e468cd93-cfe9-507d-9c2c-56c3e29a2ef1", 00:15:16.851 "is_configured": true, 00:15:16.851 "data_offset": 2048, 00:15:16.851 "data_size": 63488 00:15:16.851 } 00:15:16.851 ] 00:15:16.851 }' 00:15:16.851 12:07:24 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:15:17.110 12:07:24 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:15:17.110 12:07:24 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:15:17.110 12:07:24 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:15:17.110 12:07:24 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:17.110 12:07:24 -- bdev/bdev_raid.sh@706 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:17.110 12:07:24 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:15:17.110 12:07:24 -- bdev/bdev_raid.sh@709 -- # killprocess 1267883 00:15:17.110 12:07:24 -- common/autotest_common.sh@926 -- # '[' -z 1267883 ']' 00:15:17.110 12:07:24 -- common/autotest_common.sh@930 -- # kill -0 1267883 00:15:17.110 12:07:24 -- common/autotest_common.sh@931 -- # uname 00:15:17.110 12:07:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:17.110 12:07:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1267883 00:15:17.110 12:07:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:17.110 12:07:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:17.110 12:07:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1267883' 00:15:17.110 killing process with pid 1267883 00:15:17.110 12:07:24 -- common/autotest_common.sh@945 -- # kill 1267883 00:15:17.110 Received shutdown signal, test time was about 60.000000 seconds 00:15:17.110 00:15:17.110 Latency(us) 00:15:17.110 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:17.110 
=================================================================================================================== 00:15:17.110 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:17.110 [2024-07-25 12:07:24.415865] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:17.110 12:07:24 -- common/autotest_common.sh@950 -- # wait 1267883 00:15:17.110 [2024-07-25 12:07:24.415919] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:17.110 [2024-07-25 12:07:24.415969] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:17.110 [2024-07-25 12:07:24.415978] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x26e15e0 name raid_bdev1, state offline 00:15:17.369 [2024-07-25 12:07:24.458817] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:17.369 12:07:24 -- bdev/bdev_raid.sh@711 -- # return 0 00:15:17.369 00:15:17.369 real 0m20.581s 00:15:17.369 user 0m28.642s 00:15:17.369 sys 0m4.439s 00:15:17.369 12:07:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:17.369 12:07:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.369 ************************************ 00:15:17.369 END TEST raid_rebuild_test_sb 00:15:17.369 ************************************ 00:15:17.628 12:07:24 -- bdev/bdev_raid.sh@737 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true 00:15:17.628 12:07:24 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:15:17.628 12:07:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:17.628 12:07:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.628 ************************************ 00:15:17.628 START TEST raid_rebuild_test_io 00:15:17.628 ************************************ 00:15:17.628 12:07:24 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 false true 00:15:17.628 12:07:24 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:15:17.628 
12:07:24 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:15:17.628 12:07:24 -- bdev/bdev_raid.sh@519 -- # local superblock=false 00:15:17.628 12:07:24 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:15:17.628 12:07:24 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:15:17.628 12:07:24 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:15:17.628 12:07:24 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:15:17.628 12:07:24 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:15:17.628 12:07:24 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:15:17.628 12:07:24 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:15:17.628 12:07:24 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:15:17.628 12:07:24 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:15:17.628 12:07:24 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:15:17.628 12:07:24 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:15:17.628 12:07:24 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:15:17.628 12:07:24 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev4 00:15:17.628 12:07:24 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:15:17.628 12:07:24 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:15:17.628 12:07:24 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:17.628 12:07:24 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:15:17.628 12:07:24 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:15:17.628 12:07:24 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:15:17.628 12:07:24 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:15:17.628 12:07:24 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:15:17.628 12:07:24 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:15:17.628 12:07:24 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:15:17.628 12:07:24 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:15:17.628 12:07:24 -- bdev/bdev_raid.sh@539 -- # '[' false = true ']' 00:15:17.628 12:07:24 -- bdev/bdev_raid.sh@544 -- # 
raid_pid=1270886 00:15:17.628 12:07:24 -- bdev/bdev_raid.sh@545 -- # waitforlisten 1270886 /var/tmp/spdk-raid.sock 00:15:17.628 12:07:24 -- bdev/bdev_raid.sh@543 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:17.628 12:07:24 -- common/autotest_common.sh@819 -- # '[' -z 1270886 ']' 00:15:17.628 12:07:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:17.628 12:07:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:17.628 12:07:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:17.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:17.628 12:07:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:17.628 12:07:24 -- common/autotest_common.sh@10 -- # set +x 00:15:17.628 [2024-07-25 12:07:24.775125] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:17.628 [2024-07-25 12:07:24.775183] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1270886 ] 00:15:17.628 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:17.628 Zero copy mechanism will not be used. 
00:15:17.628 [2024-07-25 12:07:24.863027] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.887 [2024-07-25 12:07:24.944651] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.887 [2024-07-25 12:07:25.003578] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:17.887 [2024-07-25 12:07:25.003606] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:18.585 12:07:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:18.585 12:07:25 -- common/autotest_common.sh@852 -- # return 0 00:15:18.585 12:07:25 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:15:18.585 12:07:25 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:15:18.585 12:07:25 -- bdev/bdev_raid.sh@553 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:18.585 BaseBdev1 00:15:18.585 12:07:25 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:15:18.585 12:07:25 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:15:18.585 12:07:25 -- bdev/bdev_raid.sh@553 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:18.852 BaseBdev2 00:15:18.852 12:07:25 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:15:18.852 12:07:25 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:15:18.852 12:07:25 -- bdev/bdev_raid.sh@553 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:18.852 BaseBdev3 00:15:18.852 12:07:26 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:15:18.852 12:07:26 -- bdev/bdev_raid.sh@549 -- # '[' false = true ']' 00:15:18.852 12:07:26 -- bdev/bdev_raid.sh@553 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 
00:15:19.110 BaseBdev4 00:15:19.110 12:07:26 -- bdev/bdev_raid.sh@558 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:15:19.110 spare_malloc 00:15:19.369 12:07:26 -- bdev/bdev_raid.sh@559 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:19.369 spare_delay 00:15:19.369 12:07:26 -- bdev/bdev_raid.sh@560 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:15:19.627 [2024-07-25 12:07:26.726387] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:19.627 [2024-07-25 12:07:26.726428] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.627 [2024-07-25 12:07:26.726443] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x25d00a0 00:15:19.627 [2024-07-25 12:07:26.726452] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.627 [2024-07-25 12:07:26.727648] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.627 [2024-07-25 12:07:26.727672] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:19.627 spare 00:15:19.627 12:07:26 -- bdev/bdev_raid.sh@563 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:15:19.627 [2024-07-25 12:07:26.890847] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:19.627 [2024-07-25 12:07:26.891840] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:19.627 [2024-07-25 12:07:26.891869] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev 
BaseBdev3 is claimed 00:15:19.627 [2024-07-25 12:07:26.891891] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:19.627 [2024-07-25 12:07:26.891941] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x2698940 00:15:19.627 [2024-07-25 12:07:26.891948] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:19.627 [2024-07-25 12:07:26.892117] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25c8ff0 00:15:19.627 [2024-07-25 12:07:26.892213] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2698940 00:15:19.627 [2024-07-25 12:07:26.892219] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2698940 00:15:19.627 [2024-07-25 12:07:26.892312] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.627 12:07:26 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:19.627 12:07:26 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:19.627 12:07:26 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:19.627 12:07:26 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:19.627 12:07:26 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:19.627 12:07:26 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:19.627 12:07:26 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:19.627 12:07:26 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:19.627 12:07:26 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:19.627 12:07:26 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:19.627 12:07:26 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:19.627 12:07:26 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.886 12:07:27 -- 
bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:19.886 "name": "raid_bdev1", 00:15:19.886 "uuid": "d8acbe84-9b91-497a-95d8-34f590dbf0b8", 00:15:19.886 "strip_size_kb": 0, 00:15:19.886 "state": "online", 00:15:19.886 "raid_level": "raid1", 00:15:19.886 "superblock": false, 00:15:19.886 "num_base_bdevs": 4, 00:15:19.886 "num_base_bdevs_discovered": 4, 00:15:19.886 "num_base_bdevs_operational": 4, 00:15:19.886 "base_bdevs_list": [ 00:15:19.886 { 00:15:19.886 "name": "BaseBdev1", 00:15:19.886 "uuid": "53ebe206-a4ce-4335-81d3-9c79ee8a325d", 00:15:19.886 "is_configured": true, 00:15:19.886 "data_offset": 0, 00:15:19.886 "data_size": 65536 00:15:19.886 }, 00:15:19.886 { 00:15:19.886 "name": "BaseBdev2", 00:15:19.886 "uuid": "05e4a825-5e02-4e02-8711-f8c83560110c", 00:15:19.886 "is_configured": true, 00:15:19.886 "data_offset": 0, 00:15:19.886 "data_size": 65536 00:15:19.886 }, 00:15:19.886 { 00:15:19.886 "name": "BaseBdev3", 00:15:19.886 "uuid": "b774e31a-523a-4565-90e6-915bccb2e300", 00:15:19.886 "is_configured": true, 00:15:19.886 "data_offset": 0, 00:15:19.886 "data_size": 65536 00:15:19.886 }, 00:15:19.886 { 00:15:19.886 "name": "BaseBdev4", 00:15:19.886 "uuid": "64c94287-4f77-4445-b5d7-98ae3b666913", 00:15:19.886 "is_configured": true, 00:15:19.886 "data_offset": 0, 00:15:19.886 "data_size": 65536 00:15:19.886 } 00:15:19.886 ] 00:15:19.886 }' 00:15:19.886 12:07:27 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:19.886 12:07:27 -- common/autotest_common.sh@10 -- # set +x 00:15:20.451 12:07:27 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:15:20.451 12:07:27 -- bdev/bdev_raid.sh@567 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:20.451 [2024-07-25 12:07:27.668949] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:20.451 12:07:27 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=65536 00:15:20.451 12:07:27 -- bdev/bdev_raid.sh@570 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.452 12:07:27 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:20.715 12:07:27 -- bdev/bdev_raid.sh@570 -- # data_offset=0 00:15:20.715 12:07:27 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:15:20.715 12:07:27 -- bdev/bdev_raid.sh@591 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:15:20.716 12:07:27 -- bdev/bdev_raid.sh@574 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:20.716 [2024-07-25 12:07:27.927546] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25cb430 00:15:20.716 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:20.716 Zero copy mechanism will not be used. 00:15:20.716 Running I/O for 60 seconds... 00:15:20.716 [2024-07-25 12:07:28.013242] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:20.716 [2024-07-25 12:07:28.013416] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x25cb430 00:15:20.974 12:07:28 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:20.974 12:07:28 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:20.974 12:07:28 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:20.974 12:07:28 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:20.974 12:07:28 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:20.974 12:07:28 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:20.974 12:07:28 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:20.974 12:07:28 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:20.974 12:07:28 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:20.974 12:07:28 -- 
bdev/bdev_raid.sh@125 -- # local tmp 00:15:20.974 12:07:28 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.974 12:07:28 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.974 12:07:28 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:20.974 "name": "raid_bdev1", 00:15:20.974 "uuid": "d8acbe84-9b91-497a-95d8-34f590dbf0b8", 00:15:20.974 "strip_size_kb": 0, 00:15:20.974 "state": "online", 00:15:20.974 "raid_level": "raid1", 00:15:20.974 "superblock": false, 00:15:20.974 "num_base_bdevs": 4, 00:15:20.974 "num_base_bdevs_discovered": 3, 00:15:20.974 "num_base_bdevs_operational": 3, 00:15:20.974 "base_bdevs_list": [ 00:15:20.974 { 00:15:20.974 "name": null, 00:15:20.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.974 "is_configured": false, 00:15:20.974 "data_offset": 0, 00:15:20.974 "data_size": 65536 00:15:20.974 }, 00:15:20.974 { 00:15:20.974 "name": "BaseBdev2", 00:15:20.974 "uuid": "05e4a825-5e02-4e02-8711-f8c83560110c", 00:15:20.974 "is_configured": true, 00:15:20.974 "data_offset": 0, 00:15:20.974 "data_size": 65536 00:15:20.974 }, 00:15:20.974 { 00:15:20.974 "name": "BaseBdev3", 00:15:20.974 "uuid": "b774e31a-523a-4565-90e6-915bccb2e300", 00:15:20.974 "is_configured": true, 00:15:20.974 "data_offset": 0, 00:15:20.974 "data_size": 65536 00:15:20.974 }, 00:15:20.974 { 00:15:20.974 "name": "BaseBdev4", 00:15:20.974 "uuid": "64c94287-4f77-4445-b5d7-98ae3b666913", 00:15:20.974 "is_configured": true, 00:15:20.974 "data_offset": 0, 00:15:20.974 "data_size": 65536 00:15:20.974 } 00:15:20.974 ] 00:15:20.974 }' 00:15:20.974 12:07:28 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:20.974 12:07:28 -- common/autotest_common.sh@10 -- # set +x 00:15:21.540 12:07:28 -- bdev/bdev_raid.sh@597 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 
00:15:21.799 [2024-07-25 12:07:28.889564] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:15:21.799 [2024-07-25 12:07:28.889607] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:21.799 12:07:28 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:15:21.799 [2024-07-25 12:07:28.947727] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2779d30 00:15:21.799 [2024-07-25 12:07:28.949619] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:21.799 [2024-07-25 12:07:29.064024] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:21.799 [2024-07-25 12:07:29.065200] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:22.058 [2024-07-25 12:07:29.287009] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:22.058 [2024-07-25 12:07:29.287679] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:22.625 [2024-07-25 12:07:29.721273] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:22.625 [2024-07-25 12:07:29.721545] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:22.625 12:07:29 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:22.625 12:07:29 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:15:22.625 12:07:29 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:15:22.625 12:07:29 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:15:22.625 12:07:29 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:15:22.625 12:07:29 -- bdev/bdev_raid.sh@188 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.884 12:07:29 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.884 12:07:30 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:15:22.884 "name": "raid_bdev1", 00:15:22.884 "uuid": "d8acbe84-9b91-497a-95d8-34f590dbf0b8", 00:15:22.884 "strip_size_kb": 0, 00:15:22.884 "state": "online", 00:15:22.884 "raid_level": "raid1", 00:15:22.884 "superblock": false, 00:15:22.884 "num_base_bdevs": 4, 00:15:22.884 "num_base_bdevs_discovered": 4, 00:15:22.884 "num_base_bdevs_operational": 4, 00:15:22.884 "process": { 00:15:22.884 "type": "rebuild", 00:15:22.884 "target": "spare", 00:15:22.884 "progress": { 00:15:22.884 "blocks": 14336, 00:15:22.884 "percent": 21 00:15:22.884 } 00:15:22.884 }, 00:15:22.884 "base_bdevs_list": [ 00:15:22.884 { 00:15:22.884 "name": "spare", 00:15:22.884 "uuid": "bf6c897f-c212-5c61-802e-a476ab165604", 00:15:22.884 "is_configured": true, 00:15:22.884 "data_offset": 0, 00:15:22.884 "data_size": 65536 00:15:22.884 }, 00:15:22.884 { 00:15:22.884 "name": "BaseBdev2", 00:15:22.884 "uuid": "05e4a825-5e02-4e02-8711-f8c83560110c", 00:15:22.884 "is_configured": true, 00:15:22.884 "data_offset": 0, 00:15:22.884 "data_size": 65536 00:15:22.884 }, 00:15:22.884 { 00:15:22.884 "name": "BaseBdev3", 00:15:22.884 "uuid": "b774e31a-523a-4565-90e6-915bccb2e300", 00:15:22.884 "is_configured": true, 00:15:22.884 "data_offset": 0, 00:15:22.884 "data_size": 65536 00:15:22.884 }, 00:15:22.884 { 00:15:22.884 "name": "BaseBdev4", 00:15:22.884 "uuid": "64c94287-4f77-4445-b5d7-98ae3b666913", 00:15:22.884 "is_configured": true, 00:15:22.884 "data_offset": 0, 00:15:22.884 "data_size": 65536 00:15:22.884 } 00:15:22.884 ] 00:15:22.884 }' 00:15:22.884 12:07:30 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:15:22.884 12:07:30 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:22.884 12:07:30 -- 
bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:15:23.143 12:07:30 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:15:23.143 12:07:30 -- bdev/bdev_raid.sh@604 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:15:23.143 [2024-07-25 12:07:30.345417] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:23.402 [2024-07-25 12:07:30.458838] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:23.402 [2024-07-25 12:07:30.468429] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:23.402 [2024-07-25 12:07:30.493037] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x25cb430 00:15:23.402 12:07:30 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:23.402 12:07:30 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:23.402 12:07:30 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:23.402 12:07:30 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:23.402 12:07:30 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:23.402 12:07:30 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:23.402 12:07:30 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:23.402 12:07:30 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:23.402 12:07:30 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:23.402 12:07:30 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:23.402 12:07:30 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:23.402 12:07:30 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.402 12:07:30 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:23.402 "name": "raid_bdev1", 00:15:23.402 "uuid": 
"d8acbe84-9b91-497a-95d8-34f590dbf0b8", 00:15:23.402 "strip_size_kb": 0, 00:15:23.402 "state": "online", 00:15:23.402 "raid_level": "raid1", 00:15:23.402 "superblock": false, 00:15:23.402 "num_base_bdevs": 4, 00:15:23.402 "num_base_bdevs_discovered": 3, 00:15:23.403 "num_base_bdevs_operational": 3, 00:15:23.403 "base_bdevs_list": [ 00:15:23.403 { 00:15:23.403 "name": null, 00:15:23.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.403 "is_configured": false, 00:15:23.403 "data_offset": 0, 00:15:23.403 "data_size": 65536 00:15:23.403 }, 00:15:23.403 { 00:15:23.403 "name": "BaseBdev2", 00:15:23.403 "uuid": "05e4a825-5e02-4e02-8711-f8c83560110c", 00:15:23.403 "is_configured": true, 00:15:23.403 "data_offset": 0, 00:15:23.403 "data_size": 65536 00:15:23.403 }, 00:15:23.403 { 00:15:23.403 "name": "BaseBdev3", 00:15:23.403 "uuid": "b774e31a-523a-4565-90e6-915bccb2e300", 00:15:23.403 "is_configured": true, 00:15:23.403 "data_offset": 0, 00:15:23.403 "data_size": 65536 00:15:23.403 }, 00:15:23.403 { 00:15:23.403 "name": "BaseBdev4", 00:15:23.403 "uuid": "64c94287-4f77-4445-b5d7-98ae3b666913", 00:15:23.403 "is_configured": true, 00:15:23.403 "data_offset": 0, 00:15:23.403 "data_size": 65536 00:15:23.403 } 00:15:23.403 ] 00:15:23.403 }' 00:15:23.403 12:07:30 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:23.403 12:07:30 -- common/autotest_common.sh@10 -- # set +x 00:15:23.969 12:07:31 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:23.969 12:07:31 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:15:23.969 12:07:31 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:15:23.969 12:07:31 -- bdev/bdev_raid.sh@185 -- # local target=none 00:15:23.969 12:07:31 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:15:23.969 12:07:31 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.969 12:07:31 -- bdev/bdev_raid.sh@188 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.226 12:07:31 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:15:24.226 "name": "raid_bdev1", 00:15:24.226 "uuid": "d8acbe84-9b91-497a-95d8-34f590dbf0b8", 00:15:24.226 "strip_size_kb": 0, 00:15:24.226 "state": "online", 00:15:24.226 "raid_level": "raid1", 00:15:24.226 "superblock": false, 00:15:24.226 "num_base_bdevs": 4, 00:15:24.226 "num_base_bdevs_discovered": 3, 00:15:24.226 "num_base_bdevs_operational": 3, 00:15:24.226 "base_bdevs_list": [ 00:15:24.226 { 00:15:24.226 "name": null, 00:15:24.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.226 "is_configured": false, 00:15:24.226 "data_offset": 0, 00:15:24.226 "data_size": 65536 00:15:24.226 }, 00:15:24.226 { 00:15:24.226 "name": "BaseBdev2", 00:15:24.226 "uuid": "05e4a825-5e02-4e02-8711-f8c83560110c", 00:15:24.226 "is_configured": true, 00:15:24.226 "data_offset": 0, 00:15:24.226 "data_size": 65536 00:15:24.226 }, 00:15:24.226 { 00:15:24.226 "name": "BaseBdev3", 00:15:24.226 "uuid": "b774e31a-523a-4565-90e6-915bccb2e300", 00:15:24.226 "is_configured": true, 00:15:24.226 "data_offset": 0, 00:15:24.226 "data_size": 65536 00:15:24.226 }, 00:15:24.226 { 00:15:24.226 "name": "BaseBdev4", 00:15:24.226 "uuid": "64c94287-4f77-4445-b5d7-98ae3b666913", 00:15:24.226 "is_configured": true, 00:15:24.226 "data_offset": 0, 00:15:24.226 "data_size": 65536 00:15:24.226 } 00:15:24.226 ] 00:15:24.226 }' 00:15:24.226 12:07:31 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:15:24.226 12:07:31 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:15:24.226 12:07:31 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:15:24.226 12:07:31 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:15:24.226 12:07:31 -- bdev/bdev_raid.sh@613 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 
spare 00:15:24.483 [2024-07-25 12:07:31.562312] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:15:24.483 [2024-07-25 12:07:31.562352] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:24.483 12:07:31 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:15:24.483 [2024-07-25 12:07:31.605598] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x25e1b30 00:15:24.483 [2024-07-25 12:07:31.606724] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:24.483 [2024-07-25 12:07:31.725793] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:24.483 [2024-07-25 12:07:31.727029] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:24.741 [2024-07-25 12:07:31.943306] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:24.741 [2024-07-25 12:07:31.943567] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:24.999 [2024-07-25 12:07:32.263721] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:25.564 12:07:32 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.564 12:07:32 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:15:25.564 12:07:32 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:15:25.564 12:07:32 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:15:25.564 12:07:32 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:15:25.564 12:07:32 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.564 12:07:32 -- bdev/bdev_raid.sh@188 -- # jq -r 
'.[] | select(.name == "raid_bdev1")' 00:15:25.564 12:07:32 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:15:25.564 "name": "raid_bdev1", 00:15:25.564 "uuid": "d8acbe84-9b91-497a-95d8-34f590dbf0b8", 00:15:25.564 "strip_size_kb": 0, 00:15:25.564 "state": "online", 00:15:25.564 "raid_level": "raid1", 00:15:25.564 "superblock": false, 00:15:25.564 "num_base_bdevs": 4, 00:15:25.564 "num_base_bdevs_discovered": 4, 00:15:25.564 "num_base_bdevs_operational": 4, 00:15:25.564 "process": { 00:15:25.564 "type": "rebuild", 00:15:25.564 "target": "spare", 00:15:25.564 "progress": { 00:15:25.564 "blocks": 14336, 00:15:25.564 "percent": 21 00:15:25.564 } 00:15:25.564 }, 00:15:25.564 "base_bdevs_list": [ 00:15:25.565 { 00:15:25.565 "name": "spare", 00:15:25.565 "uuid": "bf6c897f-c212-5c61-802e-a476ab165604", 00:15:25.565 "is_configured": true, 00:15:25.565 "data_offset": 0, 00:15:25.565 "data_size": 65536 00:15:25.565 }, 00:15:25.565 { 00:15:25.565 "name": "BaseBdev2", 00:15:25.565 "uuid": "05e4a825-5e02-4e02-8711-f8c83560110c", 00:15:25.565 "is_configured": true, 00:15:25.565 "data_offset": 0, 00:15:25.565 "data_size": 65536 00:15:25.565 }, 00:15:25.565 { 00:15:25.565 "name": "BaseBdev3", 00:15:25.565 "uuid": "b774e31a-523a-4565-90e6-915bccb2e300", 00:15:25.565 "is_configured": true, 00:15:25.565 "data_offset": 0, 00:15:25.565 "data_size": 65536 00:15:25.565 }, 00:15:25.565 { 00:15:25.565 "name": "BaseBdev4", 00:15:25.565 "uuid": "64c94287-4f77-4445-b5d7-98ae3b666913", 00:15:25.565 "is_configured": true, 00:15:25.565 "data_offset": 0, 00:15:25.565 "data_size": 65536 00:15:25.565 } 00:15:25.565 ] 00:15:25.565 }' 00:15:25.565 12:07:32 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:15:25.565 [2024-07-25 12:07:32.816495] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:25.565 12:07:32 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:25.565 12:07:32 -- 
bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:15:25.565 12:07:32 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:15:25.565 12:07:32 -- bdev/bdev_raid.sh@617 -- # '[' false = true ']' 00:15:25.565 12:07:32 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:15:25.565 12:07:32 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:15:25.565 12:07:32 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:15:25.565 12:07:32 -- bdev/bdev_raid.sh@646 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:25.823 [2024-07-25 12:07:33.026458] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:25.823 [2024-07-25 12:07:33.087956] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x25cb430 00:15:25.823 [2024-07-25 12:07:33.087976] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x25e1b30 00:15:25.823 12:07:33 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:15:25.823 12:07:33 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:15:25.823 12:07:33 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.823 12:07:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:15:25.823 12:07:33 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:15:25.823 12:07:33 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:15:25.823 12:07:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:15:25.823 12:07:33 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:25.823 12:07:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.081 12:07:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:15:26.081 "name": "raid_bdev1", 00:15:26.081 "uuid": "d8acbe84-9b91-497a-95d8-34f590dbf0b8", 00:15:26.081 
"strip_size_kb": 0, 00:15:26.081 "state": "online", 00:15:26.081 "raid_level": "raid1", 00:15:26.081 "superblock": false, 00:15:26.081 "num_base_bdevs": 4, 00:15:26.081 "num_base_bdevs_discovered": 3, 00:15:26.081 "num_base_bdevs_operational": 3, 00:15:26.081 "process": { 00:15:26.081 "type": "rebuild", 00:15:26.081 "target": "spare", 00:15:26.081 "progress": { 00:15:26.081 "blocks": 22528, 00:15:26.081 "percent": 34 00:15:26.081 } 00:15:26.081 }, 00:15:26.081 "base_bdevs_list": [ 00:15:26.081 { 00:15:26.081 "name": "spare", 00:15:26.081 "uuid": "bf6c897f-c212-5c61-802e-a476ab165604", 00:15:26.081 "is_configured": true, 00:15:26.081 "data_offset": 0, 00:15:26.081 "data_size": 65536 00:15:26.081 }, 00:15:26.081 { 00:15:26.081 "name": null, 00:15:26.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.081 "is_configured": false, 00:15:26.081 "data_offset": 0, 00:15:26.081 "data_size": 65536 00:15:26.081 }, 00:15:26.081 { 00:15:26.081 "name": "BaseBdev3", 00:15:26.081 "uuid": "b774e31a-523a-4565-90e6-915bccb2e300", 00:15:26.081 "is_configured": true, 00:15:26.081 "data_offset": 0, 00:15:26.081 "data_size": 65536 00:15:26.081 }, 00:15:26.081 { 00:15:26.081 "name": "BaseBdev4", 00:15:26.081 "uuid": "64c94287-4f77-4445-b5d7-98ae3b666913", 00:15:26.081 "is_configured": true, 00:15:26.081 "data_offset": 0, 00:15:26.081 "data_size": 65536 00:15:26.081 } 00:15:26.081 ] 00:15:26.081 }' 00:15:26.081 12:07:33 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:15:26.081 12:07:33 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:26.081 12:07:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:15:26.081 12:07:33 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:15:26.081 12:07:33 -- bdev/bdev_raid.sh@657 -- # local timeout=392 00:15:26.081 12:07:33 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:15:26.081 12:07:33 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:26.081 
12:07:33 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:15:26.081 12:07:33 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:15:26.081 12:07:33 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:15:26.081 12:07:33 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:15:26.081 12:07:33 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.081 12:07:33 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.339 [2024-07-25 12:07:33.528138] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:26.339 12:07:33 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:15:26.339 "name": "raid_bdev1", 00:15:26.339 "uuid": "d8acbe84-9b91-497a-95d8-34f590dbf0b8", 00:15:26.339 "strip_size_kb": 0, 00:15:26.339 "state": "online", 00:15:26.339 "raid_level": "raid1", 00:15:26.339 "superblock": false, 00:15:26.339 "num_base_bdevs": 4, 00:15:26.339 "num_base_bdevs_discovered": 3, 00:15:26.339 "num_base_bdevs_operational": 3, 00:15:26.339 "process": { 00:15:26.339 "type": "rebuild", 00:15:26.339 "target": "spare", 00:15:26.339 "progress": { 00:15:26.339 "blocks": 26624, 00:15:26.339 "percent": 40 00:15:26.339 } 00:15:26.339 }, 00:15:26.339 "base_bdevs_list": [ 00:15:26.339 { 00:15:26.339 "name": "spare", 00:15:26.339 "uuid": "bf6c897f-c212-5c61-802e-a476ab165604", 00:15:26.339 "is_configured": true, 00:15:26.339 "data_offset": 0, 00:15:26.339 "data_size": 65536 00:15:26.339 }, 00:15:26.339 { 00:15:26.339 "name": null, 00:15:26.339 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.339 "is_configured": false, 00:15:26.339 "data_offset": 0, 00:15:26.339 "data_size": 65536 00:15:26.339 }, 00:15:26.339 { 00:15:26.339 "name": "BaseBdev3", 00:15:26.339 "uuid": "b774e31a-523a-4565-90e6-915bccb2e300", 00:15:26.339 "is_configured": true, 00:15:26.339 "data_offset": 
0, 00:15:26.339 "data_size": 65536 00:15:26.339 }, 00:15:26.339 { 00:15:26.339 "name": "BaseBdev4", 00:15:26.339 "uuid": "64c94287-4f77-4445-b5d7-98ae3b666913", 00:15:26.339 "is_configured": true, 00:15:26.339 "data_offset": 0, 00:15:26.339 "data_size": 65536 00:15:26.339 } 00:15:26.339 ] 00:15:26.339 }' 00:15:26.339 12:07:33 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:15:26.339 12:07:33 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:26.339 12:07:33 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:15:26.339 12:07:33 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:15:26.339 12:07:33 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:15:26.613 [2024-07-25 12:07:33.863630] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:27.179 [2024-07-25 12:07:34.301804] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:27.436 12:07:34 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:15:27.436 12:07:34 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:27.436 12:07:34 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:15:27.436 12:07:34 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:15:27.436 12:07:34 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:15:27.436 12:07:34 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:15:27.436 12:07:34 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.436 12:07:34 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:27.694 12:07:34 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:15:27.694 "name": "raid_bdev1", 00:15:27.694 "uuid": "d8acbe84-9b91-497a-95d8-34f590dbf0b8", 00:15:27.694 "strip_size_kb": 0, 00:15:27.694 "state": 
"online", 00:15:27.694 "raid_level": "raid1", 00:15:27.694 "superblock": false, 00:15:27.694 "num_base_bdevs": 4, 00:15:27.694 "num_base_bdevs_discovered": 3, 00:15:27.694 "num_base_bdevs_operational": 3, 00:15:27.694 "process": { 00:15:27.694 "type": "rebuild", 00:15:27.694 "target": "spare", 00:15:27.694 "progress": { 00:15:27.694 "blocks": 47104, 00:15:27.694 "percent": 71 00:15:27.694 } 00:15:27.694 }, 00:15:27.694 "base_bdevs_list": [ 00:15:27.694 { 00:15:27.694 "name": "spare", 00:15:27.694 "uuid": "bf6c897f-c212-5c61-802e-a476ab165604", 00:15:27.694 "is_configured": true, 00:15:27.694 "data_offset": 0, 00:15:27.694 "data_size": 65536 00:15:27.694 }, 00:15:27.694 { 00:15:27.694 "name": null, 00:15:27.694 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.694 "is_configured": false, 00:15:27.694 "data_offset": 0, 00:15:27.694 "data_size": 65536 00:15:27.694 }, 00:15:27.694 { 00:15:27.694 "name": "BaseBdev3", 00:15:27.694 "uuid": "b774e31a-523a-4565-90e6-915bccb2e300", 00:15:27.694 "is_configured": true, 00:15:27.694 "data_offset": 0, 00:15:27.694 "data_size": 65536 00:15:27.694 }, 00:15:27.694 { 00:15:27.694 "name": "BaseBdev4", 00:15:27.694 "uuid": "64c94287-4f77-4445-b5d7-98ae3b666913", 00:15:27.694 "is_configured": true, 00:15:27.694 "data_offset": 0, 00:15:27.694 "data_size": 65536 00:15:27.694 } 00:15:27.694 ] 00:15:27.694 }' 00:15:27.694 12:07:34 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:15:27.694 12:07:34 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:27.694 12:07:34 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:15:27.694 12:07:34 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:15:27.694 12:07:34 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:15:27.694 [2024-07-25 12:07:34.958264] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:15:27.952 [2024-07-25 12:07:35.176705] bdev_raid.c: 
723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:15:28.886 12:07:35 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:15:28.886 12:07:35 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:28.886 12:07:35 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:15:28.886 12:07:35 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:15:28.886 12:07:35 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:15:28.886 12:07:35 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:15:28.886 12:07:35 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.886 12:07:35 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.886 [2024-07-25 12:07:35.955107] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:28.886 [2024-07-25 12:07:36.060706] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:28.886 [2024-07-25 12:07:36.063836] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.886 12:07:36 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:15:28.886 "name": "raid_bdev1", 00:15:28.886 "uuid": "d8acbe84-9b91-497a-95d8-34f590dbf0b8", 00:15:28.886 "strip_size_kb": 0, 00:15:28.886 "state": "online", 00:15:28.886 "raid_level": "raid1", 00:15:28.886 "superblock": false, 00:15:28.886 "num_base_bdevs": 4, 00:15:28.886 "num_base_bdevs_discovered": 3, 00:15:28.886 "num_base_bdevs_operational": 3, 00:15:28.886 "process": { 00:15:28.886 "type": "rebuild", 00:15:28.886 "target": "spare", 00:15:28.886 "progress": { 00:15:28.886 "blocks": 65536, 00:15:28.886 "percent": 100 00:15:28.886 } 00:15:28.886 }, 00:15:28.886 "base_bdevs_list": [ 00:15:28.886 { 00:15:28.886 "name": "spare", 00:15:28.886 "uuid": 
"bf6c897f-c212-5c61-802e-a476ab165604", 00:15:28.886 "is_configured": true, 00:15:28.886 "data_offset": 0, 00:15:28.886 "data_size": 65536 00:15:28.886 }, 00:15:28.886 { 00:15:28.886 "name": null, 00:15:28.886 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.886 "is_configured": false, 00:15:28.886 "data_offset": 0, 00:15:28.886 "data_size": 65536 00:15:28.886 }, 00:15:28.886 { 00:15:28.886 "name": "BaseBdev3", 00:15:28.886 "uuid": "b774e31a-523a-4565-90e6-915bccb2e300", 00:15:28.886 "is_configured": true, 00:15:28.886 "data_offset": 0, 00:15:28.886 "data_size": 65536 00:15:28.886 }, 00:15:28.886 { 00:15:28.886 "name": "BaseBdev4", 00:15:28.886 "uuid": "64c94287-4f77-4445-b5d7-98ae3b666913", 00:15:28.886 "is_configured": true, 00:15:28.886 "data_offset": 0, 00:15:28.886 "data_size": 65536 00:15:28.886 } 00:15:28.886 ] 00:15:28.886 }' 00:15:28.886 12:07:36 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:15:28.886 12:07:36 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:28.886 12:07:36 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:15:28.886 12:07:36 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:15:28.886 12:07:36 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:15:30.257 12:07:37 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:15:30.257 12:07:37 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.257 12:07:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:15:30.257 12:07:37 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:15:30.257 12:07:37 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:15:30.257 12:07:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:15:30.257 12:07:37 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.257 12:07:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:15:30.257 12:07:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:15:30.257 "name": "raid_bdev1", 00:15:30.257 "uuid": "d8acbe84-9b91-497a-95d8-34f590dbf0b8", 00:15:30.257 "strip_size_kb": 0, 00:15:30.257 "state": "online", 00:15:30.257 "raid_level": "raid1", 00:15:30.257 "superblock": false, 00:15:30.257 "num_base_bdevs": 4, 00:15:30.257 "num_base_bdevs_discovered": 3, 00:15:30.257 "num_base_bdevs_operational": 3, 00:15:30.257 "base_bdevs_list": [ 00:15:30.257 { 00:15:30.257 "name": "spare", 00:15:30.257 "uuid": "bf6c897f-c212-5c61-802e-a476ab165604", 00:15:30.257 "is_configured": true, 00:15:30.257 "data_offset": 0, 00:15:30.257 "data_size": 65536 00:15:30.257 }, 00:15:30.257 { 00:15:30.257 "name": null, 00:15:30.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.257 "is_configured": false, 00:15:30.257 "data_offset": 0, 00:15:30.257 "data_size": 65536 00:15:30.257 }, 00:15:30.257 { 00:15:30.257 "name": "BaseBdev3", 00:15:30.257 "uuid": "b774e31a-523a-4565-90e6-915bccb2e300", 00:15:30.257 "is_configured": true, 00:15:30.257 "data_offset": 0, 00:15:30.257 "data_size": 65536 00:15:30.257 }, 00:15:30.257 { 00:15:30.257 "name": "BaseBdev4", 00:15:30.257 "uuid": "64c94287-4f77-4445-b5d7-98ae3b666913", 00:15:30.257 "is_configured": true, 00:15:30.257 "data_offset": 0, 00:15:30.257 "data_size": 65536 00:15:30.257 } 00:15:30.257 ] 00:15:30.257 }' 00:15:30.257 12:07:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:15:30.257 12:07:37 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:30.257 12:07:37 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:15:30.257 12:07:37 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:15:30.257 12:07:37 -- bdev/bdev_raid.sh@660 -- # break 00:15:30.257 12:07:37 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:30.257 12:07:37 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:15:30.257 12:07:37 -- bdev/bdev_raid.sh@184 -- # 
local process_type=none 00:15:30.257 12:07:37 -- bdev/bdev_raid.sh@185 -- # local target=none 00:15:30.257 12:07:37 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:15:30.257 12:07:37 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.257 12:07:37 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.515 12:07:37 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:15:30.515 "name": "raid_bdev1", 00:15:30.515 "uuid": "d8acbe84-9b91-497a-95d8-34f590dbf0b8", 00:15:30.515 "strip_size_kb": 0, 00:15:30.515 "state": "online", 00:15:30.515 "raid_level": "raid1", 00:15:30.515 "superblock": false, 00:15:30.515 "num_base_bdevs": 4, 00:15:30.515 "num_base_bdevs_discovered": 3, 00:15:30.515 "num_base_bdevs_operational": 3, 00:15:30.515 "base_bdevs_list": [ 00:15:30.515 { 00:15:30.515 "name": "spare", 00:15:30.515 "uuid": "bf6c897f-c212-5c61-802e-a476ab165604", 00:15:30.515 "is_configured": true, 00:15:30.515 "data_offset": 0, 00:15:30.515 "data_size": 65536 00:15:30.515 }, 00:15:30.515 { 00:15:30.515 "name": null, 00:15:30.515 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.515 "is_configured": false, 00:15:30.515 "data_offset": 0, 00:15:30.515 "data_size": 65536 00:15:30.515 }, 00:15:30.515 { 00:15:30.515 "name": "BaseBdev3", 00:15:30.515 "uuid": "b774e31a-523a-4565-90e6-915bccb2e300", 00:15:30.515 "is_configured": true, 00:15:30.515 "data_offset": 0, 00:15:30.515 "data_size": 65536 00:15:30.515 }, 00:15:30.515 { 00:15:30.515 "name": "BaseBdev4", 00:15:30.515 "uuid": "64c94287-4f77-4445-b5d7-98ae3b666913", 00:15:30.515 "is_configured": true, 00:15:30.515 "data_offset": 0, 00:15:30.515 "data_size": 65536 00:15:30.515 } 00:15:30.515 ] 00:15:30.515 }' 00:15:30.515 12:07:37 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:15:30.515 12:07:37 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:15:30.515 12:07:37 -- 
bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:15:30.515 12:07:37 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:15:30.515 12:07:37 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:30.515 12:07:37 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:30.515 12:07:37 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:30.515 12:07:37 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:30.515 12:07:37 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:30.515 12:07:37 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:30.515 12:07:37 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:30.515 12:07:37 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:30.515 12:07:37 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:30.515 12:07:37 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:30.515 12:07:37 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:30.515 12:07:37 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.773 12:07:37 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:30.773 "name": "raid_bdev1", 00:15:30.773 "uuid": "d8acbe84-9b91-497a-95d8-34f590dbf0b8", 00:15:30.773 "strip_size_kb": 0, 00:15:30.773 "state": "online", 00:15:30.773 "raid_level": "raid1", 00:15:30.773 "superblock": false, 00:15:30.773 "num_base_bdevs": 4, 00:15:30.773 "num_base_bdevs_discovered": 3, 00:15:30.773 "num_base_bdevs_operational": 3, 00:15:30.773 "base_bdevs_list": [ 00:15:30.773 { 00:15:30.773 "name": "spare", 00:15:30.773 "uuid": "bf6c897f-c212-5c61-802e-a476ab165604", 00:15:30.773 "is_configured": true, 00:15:30.773 "data_offset": 0, 00:15:30.773 "data_size": 65536 00:15:30.773 }, 00:15:30.773 { 00:15:30.773 "name": null, 00:15:30.773 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.773 
"is_configured": false, 00:15:30.773 "data_offset": 0, 00:15:30.773 "data_size": 65536 00:15:30.773 }, 00:15:30.773 { 00:15:30.773 "name": "BaseBdev3", 00:15:30.773 "uuid": "b774e31a-523a-4565-90e6-915bccb2e300", 00:15:30.773 "is_configured": true, 00:15:30.773 "data_offset": 0, 00:15:30.773 "data_size": 65536 00:15:30.773 }, 00:15:30.773 { 00:15:30.773 "name": "BaseBdev4", 00:15:30.773 "uuid": "64c94287-4f77-4445-b5d7-98ae3b666913", 00:15:30.773 "is_configured": true, 00:15:30.773 "data_offset": 0, 00:15:30.773 "data_size": 65536 00:15:30.773 } 00:15:30.773 ] 00:15:30.773 }' 00:15:30.773 12:07:37 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:30.773 12:07:37 -- common/autotest_common.sh@10 -- # set +x 00:15:31.338 12:07:38 -- bdev/bdev_raid.sh@670 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:31.338 [2024-07-25 12:07:38.510333] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:31.338 [2024-07-25 12:07:38.510362] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:31.338 00:15:31.338 Latency(us) 00:15:31.338 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.338 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:31.338 raid_bdev1 : 10.59 107.68 323.03 0.00 0.00 13433.43 227.95 115343.36 00:15:31.338 =================================================================================================================== 00:15:31.338 Total : 107.68 323.03 0.00 0.00 13433.43 227.95 115343.36 00:15:31.338 [2024-07-25 12:07:38.545434] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:31.338 [2024-07-25 12:07:38.545457] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:31.338 [2024-07-25 12:07:38.545511] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:15:31.338 [2024-07-25 12:07:38.545519] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2698940 name raid_bdev1, state offline 00:15:31.338 0 00:15:31.338 12:07:38 -- bdev/bdev_raid.sh@671 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.338 12:07:38 -- bdev/bdev_raid.sh@671 -- # jq length 00:15:31.595 12:07:38 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:15:31.595 12:07:38 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:15:31.595 12:07:38 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:15:31.595 12:07:38 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:31.595 12:07:38 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:31.595 12:07:38 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:31.595 12:07:38 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:31.595 12:07:38 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:31.595 12:07:38 -- bdev/nbd_common.sh@12 -- # local i 00:15:31.595 12:07:38 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:31.595 12:07:38 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:31.595 12:07:38 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:15:31.595 /dev/nbd0 00:15:31.853 12:07:38 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:31.853 12:07:38 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:31.853 12:07:38 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:15:31.853 12:07:38 -- common/autotest_common.sh@857 -- # local i 00:15:31.853 12:07:38 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:31.853 12:07:38 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:31.853 12:07:38 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:15:31.853 12:07:38 -- 
common/autotest_common.sh@861 -- # break 00:15:31.853 12:07:38 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:31.853 12:07:38 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:31.853 12:07:38 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:31.853 1+0 records in 00:15:31.853 1+0 records out 00:15:31.853 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000260198 s, 15.7 MB/s 00:15:31.853 12:07:38 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:15:31.853 12:07:38 -- common/autotest_common.sh@874 -- # size=4096 00:15:31.853 12:07:38 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:15:31.853 12:07:38 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:31.853 12:07:38 -- common/autotest_common.sh@877 -- # return 0 00:15:31.853 12:07:38 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:31.853 12:07:38 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:31.853 12:07:38 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:15:31.853 12:07:38 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:15:31.853 12:07:38 -- bdev/bdev_raid.sh@678 -- # continue 00:15:31.853 12:07:38 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:15:31.853 12:07:38 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:15:31.853 12:07:38 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:15:31.853 12:07:38 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:31.853 12:07:38 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:31.853 12:07:38 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:31.853 12:07:38 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:31.853 12:07:38 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:31.853 12:07:38 -- 
bdev/nbd_common.sh@12 -- # local i 00:15:31.853 12:07:38 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:31.853 12:07:38 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:31.853 12:07:38 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:31.853 /dev/nbd1 00:15:31.853 12:07:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:31.853 12:07:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:31.853 12:07:39 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:15:31.853 12:07:39 -- common/autotest_common.sh@857 -- # local i 00:15:31.853 12:07:39 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:31.853 12:07:39 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:31.853 12:07:39 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:15:31.853 12:07:39 -- common/autotest_common.sh@861 -- # break 00:15:31.853 12:07:39 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:31.853 12:07:39 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:31.853 12:07:39 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:31.853 1+0 records in 00:15:31.853 1+0 records out 00:15:31.853 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277321 s, 14.8 MB/s 00:15:31.853 12:07:39 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:15:31.853 12:07:39 -- common/autotest_common.sh@874 -- # size=4096 00:15:31.853 12:07:39 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:15:31.853 12:07:39 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:31.853 12:07:39 -- common/autotest_common.sh@877 -- # return 0 00:15:31.853 12:07:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:31.853 12:07:39 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:31.853 12:07:39 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:32.110 12:07:39 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:15:32.110 12:07:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:32.110 12:07:39 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:32.110 12:07:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:32.110 12:07:39 -- bdev/nbd_common.sh@51 -- # local i 00:15:32.110 12:07:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:32.110 12:07:39 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:15:32.367 12:07:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:32.367 12:07:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:32.367 12:07:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:32.367 12:07:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:32.367 12:07:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:32.367 12:07:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:32.367 12:07:39 -- bdev/nbd_common.sh@41 -- # break 00:15:32.367 12:07:39 -- bdev/nbd_common.sh@45 -- # return 0 00:15:32.367 12:07:39 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:15:32.367 12:07:39 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:15:32.367 12:07:39 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:15:32.367 12:07:39 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:32.367 12:07:39 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:32.367 12:07:39 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:32.367 12:07:39 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:32.367 12:07:39 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:32.367 12:07:39 -- 
bdev/nbd_common.sh@12 -- # local i 00:15:32.367 12:07:39 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:32.367 12:07:39 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:32.367 12:07:39 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:32.367 /dev/nbd1 00:15:32.367 12:07:39 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:32.367 12:07:39 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:32.367 12:07:39 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:15:32.367 12:07:39 -- common/autotest_common.sh@857 -- # local i 00:15:32.367 12:07:39 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:32.367 12:07:39 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:32.367 12:07:39 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:15:32.367 12:07:39 -- common/autotest_common.sh@861 -- # break 00:15:32.367 12:07:39 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:32.368 12:07:39 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:32.368 12:07:39 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:32.368 1+0 records in 00:15:32.368 1+0 records out 00:15:32.368 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000128379 s, 31.9 MB/s 00:15:32.368 12:07:39 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:15:32.368 12:07:39 -- common/autotest_common.sh@874 -- # size=4096 00:15:32.368 12:07:39 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:15:32.368 12:07:39 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:32.368 12:07:39 -- common/autotest_common.sh@877 -- # return 0 00:15:32.368 12:07:39 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:32.368 12:07:39 -- 
bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:32.368 12:07:39 -- bdev/bdev_raid.sh@681 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:32.625 12:07:39 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:15:32.625 12:07:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:32.625 12:07:39 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:32.625 12:07:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:32.625 12:07:39 -- bdev/nbd_common.sh@51 -- # local i 00:15:32.625 12:07:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:32.625 12:07:39 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:15:32.625 12:07:39 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:32.625 12:07:39 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:32.625 12:07:39 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:32.625 12:07:39 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:32.625 12:07:39 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:32.625 12:07:39 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:32.625 12:07:39 -- bdev/nbd_common.sh@41 -- # break 00:15:32.625 12:07:39 -- bdev/nbd_common.sh@45 -- # return 0 00:15:32.625 12:07:39 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:15:32.625 12:07:39 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:32.625 12:07:39 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:32.625 12:07:39 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:32.625 12:07:39 -- bdev/nbd_common.sh@51 -- # local i 00:15:32.625 12:07:39 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:32.625 12:07:39 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:15:32.882 12:07:40 -- bdev/nbd_common.sh@55 -- 
# basename /dev/nbd0 00:15:32.882 12:07:40 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:32.882 12:07:40 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:32.882 12:07:40 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:32.882 12:07:40 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:32.882 12:07:40 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:32.882 12:07:40 -- bdev/nbd_common.sh@41 -- # break 00:15:32.882 12:07:40 -- bdev/nbd_common.sh@45 -- # return 0 00:15:32.882 12:07:40 -- bdev/bdev_raid.sh@692 -- # '[' false = true ']' 00:15:32.882 12:07:40 -- bdev/bdev_raid.sh@709 -- # killprocess 1270886 00:15:32.882 12:07:40 -- common/autotest_common.sh@926 -- # '[' -z 1270886 ']' 00:15:32.882 12:07:40 -- common/autotest_common.sh@930 -- # kill -0 1270886 00:15:32.882 12:07:40 -- common/autotest_common.sh@931 -- # uname 00:15:32.882 12:07:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:32.882 12:07:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1270886 00:15:32.882 12:07:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:32.882 12:07:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:32.882 12:07:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1270886' 00:15:32.882 killing process with pid 1270886 00:15:32.882 12:07:40 -- common/autotest_common.sh@945 -- # kill 1270886 00:15:32.882 Received shutdown signal, test time was about 12.160141 seconds 00:15:32.882 00:15:32.882 Latency(us) 00:15:32.882 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:32.882 =================================================================================================================== 00:15:32.882 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:32.882 [2024-07-25 12:07:40.119176] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:32.882 12:07:40 -- common/autotest_common.sh@950 -- # wait 1270886 
00:15:32.882 [2024-07-25 12:07:40.156946] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:33.140 12:07:40 -- bdev/bdev_raid.sh@711 -- # return 0 00:15:33.140 00:15:33.140 real 0m15.672s 00:15:33.140 user 0m22.880s 00:15:33.140 sys 0m2.766s 00:15:33.140 12:07:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:33.140 12:07:40 -- common/autotest_common.sh@10 -- # set +x 00:15:33.140 ************************************ 00:15:33.140 END TEST raid_rebuild_test_io 00:15:33.140 ************************************ 00:15:33.140 12:07:40 -- bdev/bdev_raid.sh@738 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true 00:15:33.140 12:07:40 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:15:33.140 12:07:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:33.140 12:07:40 -- common/autotest_common.sh@10 -- # set +x 00:15:33.140 ************************************ 00:15:33.140 START TEST raid_rebuild_test_sb_io 00:15:33.140 ************************************ 00:15:33.140 12:07:40 -- common/autotest_common.sh@1104 -- # raid_rebuild_test raid1 4 true true 00:15:33.140 12:07:40 -- bdev/bdev_raid.sh@517 -- # local raid_level=raid1 00:15:33.140 12:07:40 -- bdev/bdev_raid.sh@518 -- # local num_base_bdevs=4 00:15:33.140 12:07:40 -- bdev/bdev_raid.sh@519 -- # local superblock=true 00:15:33.140 12:07:40 -- bdev/bdev_raid.sh@520 -- # local background_io=true 00:15:33.140 12:07:40 -- bdev/bdev_raid.sh@521 -- # (( i = 1 )) 00:15:33.140 12:07:40 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:15:33.140 12:07:40 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev1 00:15:33.140 12:07:40 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:15:33.140 12:07:40 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:15:33.140 12:07:40 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev2 00:15:33.140 12:07:40 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:15:33.140 12:07:40 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:15:33.140 
12:07:40 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev3 00:15:33.140 12:07:40 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:15:33.140 12:07:40 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:15:33.140 12:07:40 -- bdev/bdev_raid.sh@523 -- # echo BaseBdev4 00:15:33.140 12:07:40 -- bdev/bdev_raid.sh@521 -- # (( i++ )) 00:15:33.140 12:07:40 -- bdev/bdev_raid.sh@521 -- # (( i <= num_base_bdevs )) 00:15:33.140 12:07:40 -- bdev/bdev_raid.sh@521 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:33.140 12:07:40 -- bdev/bdev_raid.sh@521 -- # local base_bdevs 00:15:33.140 12:07:40 -- bdev/bdev_raid.sh@522 -- # local raid_bdev_name=raid_bdev1 00:15:33.140 12:07:40 -- bdev/bdev_raid.sh@523 -- # local strip_size 00:15:33.140 12:07:40 -- bdev/bdev_raid.sh@524 -- # local create_arg 00:15:33.140 12:07:40 -- bdev/bdev_raid.sh@525 -- # local raid_bdev_size 00:15:33.140 12:07:40 -- bdev/bdev_raid.sh@526 -- # local data_offset 00:15:33.140 12:07:40 -- bdev/bdev_raid.sh@528 -- # '[' raid1 '!=' raid1 ']' 00:15:33.140 12:07:40 -- bdev/bdev_raid.sh@536 -- # strip_size=0 00:15:33.140 12:07:40 -- bdev/bdev_raid.sh@539 -- # '[' true = true ']' 00:15:33.140 12:07:40 -- bdev/bdev_raid.sh@540 -- # create_arg+=' -s' 00:15:33.140 12:07:40 -- bdev/bdev_raid.sh@544 -- # raid_pid=1273303 00:15:33.140 12:07:40 -- bdev/bdev_raid.sh@545 -- # waitforlisten 1273303 /var/tmp/spdk-raid.sock 00:15:33.140 12:07:40 -- bdev/bdev_raid.sh@543 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:33.140 12:07:40 -- common/autotest_common.sh@819 -- # '[' -z 1273303 ']' 00:15:33.140 12:07:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:33.140 12:07:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:33.140 12:07:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk-raid.sock...' 00:15:33.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:33.140 12:07:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:33.140 12:07:40 -- common/autotest_common.sh@10 -- # set +x 00:15:33.398 [2024-07-25 12:07:40.492848] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:33.398 [2024-07-25 12:07:40.492900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1273303 ] 00:15:33.398 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:33.398 Zero copy mechanism will not be used. 00:15:33.398 [2024-07-25 12:07:40.580783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:33.398 [2024-07-25 12:07:40.669859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:33.655 [2024-07-25 12:07:40.734546] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:33.655 [2024-07-25 12:07:40.734574] bdev_raid.c:1292:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:34.220 12:07:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:34.220 12:07:41 -- common/autotest_common.sh@852 -- # return 0 00:15:34.220 12:07:41 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:15:34.220 12:07:41 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:15:34.220 12:07:41 -- bdev/bdev_raid.sh@550 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:34.220 BaseBdev1_malloc 00:15:34.220 12:07:41 -- bdev/bdev_raid.sh@551 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:34.476 [2024-07-25 12:07:41.616687] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:34.476 [2024-07-25 12:07:41.616724] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.476 [2024-07-25 12:07:41.616757] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x205ba00 00:15:34.476 [2024-07-25 12:07:41.616765] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.476 [2024-07-25 12:07:41.617985] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.476 [2024-07-25 12:07:41.618007] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:34.476 BaseBdev1 00:15:34.476 12:07:41 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:15:34.476 12:07:41 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:15:34.476 12:07:41 -- bdev/bdev_raid.sh@550 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:34.476 BaseBdev2_malloc 00:15:34.733 12:07:41 -- bdev/bdev_raid.sh@551 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:34.733 [2024-07-25 12:07:41.938543] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:34.733 [2024-07-25 12:07:41.938579] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.733 [2024-07-25 12:07:41.938595] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x205c5f0 00:15:34.733 [2024-07-25 12:07:41.938603] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.733 [2024-07-25 12:07:41.939761] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.733 [2024-07-25 12:07:41.939784] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev2 00:15:34.733 BaseBdev2 00:15:34.733 12:07:41 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:15:34.733 12:07:41 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:15:34.733 12:07:41 -- bdev/bdev_raid.sh@550 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:34.991 BaseBdev3_malloc 00:15:34.991 12:07:42 -- bdev/bdev_raid.sh@551 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:34.991 [2024-07-25 12:07:42.260528] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:34.991 [2024-07-25 12:07:42.260564] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:34.991 [2024-07-25 12:07:42.260593] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2127ab0 00:15:34.991 [2024-07-25 12:07:42.260601] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:34.991 [2024-07-25 12:07:42.261736] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:34.991 [2024-07-25 12:07:42.261759] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:34.991 BaseBdev3 00:15:34.991 12:07:42 -- bdev/bdev_raid.sh@548 -- # for bdev in "${base_bdevs[@]}" 00:15:34.991 12:07:42 -- bdev/bdev_raid.sh@549 -- # '[' true = true ']' 00:15:34.991 12:07:42 -- bdev/bdev_raid.sh@550 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:35.249 BaseBdev4_malloc 00:15:35.249 12:07:42 -- bdev/bdev_raid.sh@551 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:35.508 [2024-07-25 12:07:42.594256] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:35.508 [2024-07-25 12:07:42.594301] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.508 [2024-07-25 12:07:42.594331] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x205cd80 00:15:35.508 [2024-07-25 12:07:42.594340] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.508 [2024-07-25 12:07:42.595499] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.508 [2024-07-25 12:07:42.595523] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:35.508 BaseBdev4 00:15:35.508 12:07:42 -- bdev/bdev_raid.sh@558 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b spare_malloc 00:15:35.508 spare_malloc 00:15:35.508 12:07:42 -- bdev/bdev_raid.sh@559 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:35.767 spare_delay 00:15:35.767 12:07:42 -- bdev/bdev_raid.sh@560 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:15:35.767 [2024-07-25 12:07:43.067111] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:35.767 [2024-07-25 12:07:43.067146] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:35.767 [2024-07-25 12:07:43.067160] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2052630 00:15:35.767 [2024-07-25 12:07:43.067168] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:35.767 [2024-07-25 12:07:43.068330] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:35.767 [2024-07-25 12:07:43.068353] 
vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:35.767 spare 00:15:36.026 12:07:43 -- bdev/bdev_raid.sh@563 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 00:15:36.026 [2024-07-25 12:07:43.219540] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:36.026 [2024-07-25 12:07:43.220504] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:36.026 [2024-07-25 12:07:43.220543] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:36.026 [2024-07-25 12:07:43.220572] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:36.026 [2024-07-25 12:07:43.220714] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x20563c0 00:15:36.026 [2024-07-25 12:07:43.220721] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:36.026 [2024-07-25 12:07:43.220864] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x20562d0 00:15:36.026 [2024-07-25 12:07:43.220963] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x20563c0 00:15:36.026 [2024-07-25 12:07:43.220969] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x20563c0 00:15:36.026 [2024-07-25 12:07:43.221033] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:36.026 12:07:43 -- bdev/bdev_raid.sh@564 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:36.026 12:07:43 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:36.026 12:07:43 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:36.026 12:07:43 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:36.026 12:07:43 -- bdev/bdev_raid.sh@120 -- # local 
strip_size=0 00:15:36.026 12:07:43 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=4 00:15:36.026 12:07:43 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:36.026 12:07:43 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:36.026 12:07:43 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:36.026 12:07:43 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:36.026 12:07:43 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.026 12:07:43 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.285 12:07:43 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:36.285 "name": "raid_bdev1", 00:15:36.285 "uuid": "c642aae4-f69c-4b54-b8bc-8fb51bee8b71", 00:15:36.285 "strip_size_kb": 0, 00:15:36.285 "state": "online", 00:15:36.285 "raid_level": "raid1", 00:15:36.285 "superblock": true, 00:15:36.285 "num_base_bdevs": 4, 00:15:36.285 "num_base_bdevs_discovered": 4, 00:15:36.285 "num_base_bdevs_operational": 4, 00:15:36.285 "base_bdevs_list": [ 00:15:36.285 { 00:15:36.285 "name": "BaseBdev1", 00:15:36.285 "uuid": "edc4f488-8d76-592b-87bd-c07acd8a7c4f", 00:15:36.285 "is_configured": true, 00:15:36.285 "data_offset": 2048, 00:15:36.285 "data_size": 63488 00:15:36.285 }, 00:15:36.285 { 00:15:36.285 "name": "BaseBdev2", 00:15:36.285 "uuid": "ad9d25d4-da8d-5eff-aafa-3ea5261f0d9a", 00:15:36.285 "is_configured": true, 00:15:36.285 "data_offset": 2048, 00:15:36.285 "data_size": 63488 00:15:36.285 }, 00:15:36.285 { 00:15:36.285 "name": "BaseBdev3", 00:15:36.285 "uuid": "4ecb93a7-626a-5480-abeb-8e06df27bd11", 00:15:36.285 "is_configured": true, 00:15:36.285 "data_offset": 2048, 00:15:36.285 "data_size": 63488 00:15:36.285 }, 00:15:36.285 { 00:15:36.285 "name": "BaseBdev4", 00:15:36.285 "uuid": "dbd10f64-c6a9-53f9-b623-afdab309b3b5", 00:15:36.285 "is_configured": true, 00:15:36.285 "data_offset": 2048, 00:15:36.285 
"data_size": 63488 00:15:36.285 } 00:15:36.285 ] 00:15:36.285 }' 00:15:36.285 12:07:43 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:36.285 12:07:43 -- common/autotest_common.sh@10 -- # set +x 00:15:36.852 12:07:43 -- bdev/bdev_raid.sh@567 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:36.852 12:07:43 -- bdev/bdev_raid.sh@567 -- # jq -r '.[].num_blocks' 00:15:36.852 [2024-07-25 12:07:44.045832] bdev_raid.c: 993:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:36.852 12:07:44 -- bdev/bdev_raid.sh@567 -- # raid_bdev_size=63488 00:15:36.852 12:07:44 -- bdev/bdev_raid.sh@570 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:36.852 12:07:44 -- bdev/bdev_raid.sh@570 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:37.110 12:07:44 -- bdev/bdev_raid.sh@570 -- # data_offset=2048 00:15:37.110 12:07:44 -- bdev/bdev_raid.sh@572 -- # '[' true = true ']' 00:15:37.110 12:07:44 -- bdev/bdev_raid.sh@591 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:15:37.110 12:07:44 -- bdev/bdev_raid.sh@574 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:37.110 [2024-07-25 12:07:44.316276] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2207dc0 00:15:37.110 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:37.110 Zero copy mechanism will not be used. 00:15:37.110 Running I/O for 60 seconds... 
00:15:37.110 [2024-07-25 12:07:44.383382] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:37.110 [2024-07-25 12:07:44.394072] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x2207dc0 00:15:37.369 12:07:44 -- bdev/bdev_raid.sh@594 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:37.369 12:07:44 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:37.369 12:07:44 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:37.369 12:07:44 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:37.369 12:07:44 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:37.369 12:07:44 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:37.369 12:07:44 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:37.369 12:07:44 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:37.369 12:07:44 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:37.369 12:07:44 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:37.369 12:07:44 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:37.369 12:07:44 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.369 12:07:44 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:37.369 "name": "raid_bdev1", 00:15:37.369 "uuid": "c642aae4-f69c-4b54-b8bc-8fb51bee8b71", 00:15:37.369 "strip_size_kb": 0, 00:15:37.369 "state": "online", 00:15:37.369 "raid_level": "raid1", 00:15:37.369 "superblock": true, 00:15:37.369 "num_base_bdevs": 4, 00:15:37.369 "num_base_bdevs_discovered": 3, 00:15:37.369 "num_base_bdevs_operational": 3, 00:15:37.369 "base_bdevs_list": [ 00:15:37.369 { 00:15:37.369 "name": null, 00:15:37.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.369 "is_configured": false, 00:15:37.369 "data_offset": 2048, 00:15:37.369 "data_size": 63488 00:15:37.369 }, 00:15:37.369 { 
00:15:37.369 "name": "BaseBdev2", 00:15:37.369 "uuid": "ad9d25d4-da8d-5eff-aafa-3ea5261f0d9a", 00:15:37.369 "is_configured": true, 00:15:37.369 "data_offset": 2048, 00:15:37.369 "data_size": 63488 00:15:37.369 }, 00:15:37.369 { 00:15:37.369 "name": "BaseBdev3", 00:15:37.369 "uuid": "4ecb93a7-626a-5480-abeb-8e06df27bd11", 00:15:37.369 "is_configured": true, 00:15:37.369 "data_offset": 2048, 00:15:37.369 "data_size": 63488 00:15:37.369 }, 00:15:37.369 { 00:15:37.369 "name": "BaseBdev4", 00:15:37.369 "uuid": "dbd10f64-c6a9-53f9-b623-afdab309b3b5", 00:15:37.369 "is_configured": true, 00:15:37.369 "data_offset": 2048, 00:15:37.369 "data_size": 63488 00:15:37.369 } 00:15:37.369 ] 00:15:37.369 }' 00:15:37.369 12:07:44 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:37.369 12:07:44 -- common/autotest_common.sh@10 -- # set +x 00:15:37.935 12:07:45 -- bdev/bdev_raid.sh@597 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:15:38.193 [2024-07-25 12:07:45.263188] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:15:38.193 [2024-07-25 12:07:45.263223] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:38.193 12:07:45 -- bdev/bdev_raid.sh@598 -- # sleep 1 00:15:38.193 [2024-07-25 12:07:45.324918] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2058f40 00:15:38.193 [2024-07-25 12:07:45.326700] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:38.193 [2024-07-25 12:07:45.435466] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:38.193 [2024-07-25 12:07:45.436651] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:38.451 [2024-07-25 12:07:45.666117] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: 
process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:38.451 [2024-07-25 12:07:45.666366] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:38.709 [2024-07-25 12:07:45.984929] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:38.709 [2024-07-25 12:07:45.985369] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:38.967 [2024-07-25 12:07:46.201377] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:38.967 [2024-07-25 12:07:46.201909] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:39.237 12:07:46 -- bdev/bdev_raid.sh@601 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:39.237 12:07:46 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:15:39.237 12:07:46 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:15:39.237 12:07:46 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:15:39.237 12:07:46 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:15:39.237 12:07:46 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.237 12:07:46 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:39.237 12:07:46 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:15:39.237 "name": "raid_bdev1", 00:15:39.237 "uuid": "c642aae4-f69c-4b54-b8bc-8fb51bee8b71", 00:15:39.237 "strip_size_kb": 0, 00:15:39.237 "state": "online", 00:15:39.237 "raid_level": "raid1", 00:15:39.237 "superblock": true, 00:15:39.237 "num_base_bdevs": 4, 00:15:39.237 "num_base_bdevs_discovered": 4, 00:15:39.237 "num_base_bdevs_operational": 4, 00:15:39.237 "process": { 00:15:39.237 
"type": "rebuild", 00:15:39.237 "target": "spare", 00:15:39.237 "progress": { 00:15:39.237 "blocks": 12288, 00:15:39.237 "percent": 19 00:15:39.237 } 00:15:39.237 }, 00:15:39.237 "base_bdevs_list": [ 00:15:39.237 { 00:15:39.237 "name": "spare", 00:15:39.237 "uuid": "1d1f44ec-6e88-5173-81ee-11c0bae3f626", 00:15:39.237 "is_configured": true, 00:15:39.237 "data_offset": 2048, 00:15:39.237 "data_size": 63488 00:15:39.237 }, 00:15:39.237 { 00:15:39.237 "name": "BaseBdev2", 00:15:39.237 "uuid": "ad9d25d4-da8d-5eff-aafa-3ea5261f0d9a", 00:15:39.237 "is_configured": true, 00:15:39.237 "data_offset": 2048, 00:15:39.237 "data_size": 63488 00:15:39.237 }, 00:15:39.237 { 00:15:39.237 "name": "BaseBdev3", 00:15:39.237 "uuid": "4ecb93a7-626a-5480-abeb-8e06df27bd11", 00:15:39.237 "is_configured": true, 00:15:39.237 "data_offset": 2048, 00:15:39.237 "data_size": 63488 00:15:39.237 }, 00:15:39.237 { 00:15:39.237 "name": "BaseBdev4", 00:15:39.237 "uuid": "dbd10f64-c6a9-53f9-b623-afdab309b3b5", 00:15:39.237 "is_configured": true, 00:15:39.237 "data_offset": 2048, 00:15:39.237 "data_size": 63488 00:15:39.237 } 00:15:39.237 ] 00:15:39.237 }' 00:15:39.237 12:07:46 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:15:39.237 12:07:46 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:39.237 12:07:46 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:15:39.237 [2024-07-25 12:07:46.516836] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:39.237 [2024-07-25 12:07:46.517299] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:39.237 12:07:46 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:15:39.237 12:07:46 -- bdev/bdev_raid.sh@604 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:15:39.536 [2024-07-25 
12:07:46.648350] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:39.536 [2024-07-25 12:07:46.695188] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:39.536 [2024-07-25 12:07:46.750762] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:39.536 [2024-07-25 12:07:46.750914] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:39.536 [2024-07-25 12:07:46.761643] bdev_raid.c:2294:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:39.536 [2024-07-25 12:07:46.771563] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:39.536 [2024-07-25 12:07:46.783127] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x2207dc0 00:15:39.536 12:07:46 -- bdev/bdev_raid.sh@607 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:39.536 12:07:46 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:39.536 12:07:46 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:39.536 12:07:46 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:39.536 12:07:46 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:39.536 12:07:46 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:39.536 12:07:46 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:39.536 12:07:46 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:39.536 12:07:46 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:39.536 12:07:46 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:39.536 12:07:46 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.536 12:07:46 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:15:39.837 12:07:47 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:39.837 "name": "raid_bdev1", 00:15:39.837 "uuid": "c642aae4-f69c-4b54-b8bc-8fb51bee8b71", 00:15:39.837 "strip_size_kb": 0, 00:15:39.837 "state": "online", 00:15:39.837 "raid_level": "raid1", 00:15:39.837 "superblock": true, 00:15:39.837 "num_base_bdevs": 4, 00:15:39.837 "num_base_bdevs_discovered": 3, 00:15:39.837 "num_base_bdevs_operational": 3, 00:15:39.837 "base_bdevs_list": [ 00:15:39.837 { 00:15:39.837 "name": null, 00:15:39.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.837 "is_configured": false, 00:15:39.837 "data_offset": 2048, 00:15:39.837 "data_size": 63488 00:15:39.837 }, 00:15:39.837 { 00:15:39.837 "name": "BaseBdev2", 00:15:39.837 "uuid": "ad9d25d4-da8d-5eff-aafa-3ea5261f0d9a", 00:15:39.837 "is_configured": true, 00:15:39.837 "data_offset": 2048, 00:15:39.837 "data_size": 63488 00:15:39.837 }, 00:15:39.837 { 00:15:39.837 "name": "BaseBdev3", 00:15:39.837 "uuid": "4ecb93a7-626a-5480-abeb-8e06df27bd11", 00:15:39.837 "is_configured": true, 00:15:39.837 "data_offset": 2048, 00:15:39.837 "data_size": 63488 00:15:39.837 }, 00:15:39.837 { 00:15:39.837 "name": "BaseBdev4", 00:15:39.837 "uuid": "dbd10f64-c6a9-53f9-b623-afdab309b3b5", 00:15:39.837 "is_configured": true, 00:15:39.837 "data_offset": 2048, 00:15:39.837 "data_size": 63488 00:15:39.837 } 00:15:39.837 ] 00:15:39.837 }' 00:15:39.837 12:07:47 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:39.837 12:07:47 -- common/autotest_common.sh@10 -- # set +x 00:15:40.404 12:07:47 -- bdev/bdev_raid.sh@610 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:40.404 12:07:47 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:15:40.404 12:07:47 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:15:40.404 12:07:47 -- bdev/bdev_raid.sh@185 -- # local target=none 00:15:40.404 12:07:47 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:15:40.404 12:07:47 -- 
bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.404 12:07:47 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.404 12:07:47 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:15:40.404 "name": "raid_bdev1", 00:15:40.404 "uuid": "c642aae4-f69c-4b54-b8bc-8fb51bee8b71", 00:15:40.404 "strip_size_kb": 0, 00:15:40.404 "state": "online", 00:15:40.404 "raid_level": "raid1", 00:15:40.404 "superblock": true, 00:15:40.404 "num_base_bdevs": 4, 00:15:40.404 "num_base_bdevs_discovered": 3, 00:15:40.404 "num_base_bdevs_operational": 3, 00:15:40.404 "base_bdevs_list": [ 00:15:40.404 { 00:15:40.404 "name": null, 00:15:40.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.404 "is_configured": false, 00:15:40.404 "data_offset": 2048, 00:15:40.404 "data_size": 63488 00:15:40.404 }, 00:15:40.404 { 00:15:40.404 "name": "BaseBdev2", 00:15:40.404 "uuid": "ad9d25d4-da8d-5eff-aafa-3ea5261f0d9a", 00:15:40.404 "is_configured": true, 00:15:40.404 "data_offset": 2048, 00:15:40.404 "data_size": 63488 00:15:40.404 }, 00:15:40.404 { 00:15:40.404 "name": "BaseBdev3", 00:15:40.404 "uuid": "4ecb93a7-626a-5480-abeb-8e06df27bd11", 00:15:40.404 "is_configured": true, 00:15:40.404 "data_offset": 2048, 00:15:40.404 "data_size": 63488 00:15:40.404 }, 00:15:40.404 { 00:15:40.404 "name": "BaseBdev4", 00:15:40.404 "uuid": "dbd10f64-c6a9-53f9-b623-afdab309b3b5", 00:15:40.404 "is_configured": true, 00:15:40.404 "data_offset": 2048, 00:15:40.404 "data_size": 63488 00:15:40.404 } 00:15:40.404 ] 00:15:40.404 }' 00:15:40.404 12:07:47 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:15:40.404 12:07:47 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:15:40.404 12:07:47 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:15:40.662 12:07:47 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:15:40.662 12:07:47 -- bdev/bdev_raid.sh@613 
-- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:15:40.662 [2024-07-25 12:07:47.879926] bdev_raid.c:3095:raid_bdev_attach_base_bdev: *DEBUG*: attach_base_device: spare 00:15:40.662 [2024-07-25 12:07:47.879962] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:40.662 12:07:47 -- bdev/bdev_raid.sh@614 -- # sleep 1 00:15:40.662 [2024-07-25 12:07:47.914841] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2057200 00:15:40.662 [2024-07-25 12:07:47.916010] bdev_raid.c:2603:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:40.921 [2024-07-25 12:07:48.025324] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:40.921 [2024-07-25 12:07:48.026533] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:41.180 [2024-07-25 12:07:48.237111] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:41.180 [2024-07-25 12:07:48.237342] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:41.438 [2024-07-25 12:07:48.595840] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:41.697 [2024-07-25 12:07:48.839423] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:41.697 12:07:48 -- bdev/bdev_raid.sh@615 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:41.697 12:07:48 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:15:41.697 12:07:48 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:15:41.697 12:07:48 -- bdev/bdev_raid.sh@185 -- # local target=spare 
00:15:41.697 12:07:48 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:15:41.697 12:07:48 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.697 12:07:48 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.697 [2024-07-25 12:07:48.967831] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:41.956 12:07:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:15:41.956 "name": "raid_bdev1", 00:15:41.956 "uuid": "c642aae4-f69c-4b54-b8bc-8fb51bee8b71", 00:15:41.956 "strip_size_kb": 0, 00:15:41.956 "state": "online", 00:15:41.956 "raid_level": "raid1", 00:15:41.956 "superblock": true, 00:15:41.956 "num_base_bdevs": 4, 00:15:41.956 "num_base_bdevs_discovered": 4, 00:15:41.956 "num_base_bdevs_operational": 4, 00:15:41.956 "process": { 00:15:41.956 "type": "rebuild", 00:15:41.956 "target": "spare", 00:15:41.956 "progress": { 00:15:41.956 "blocks": 16384, 00:15:41.956 "percent": 25 00:15:41.956 } 00:15:41.956 }, 00:15:41.956 "base_bdevs_list": [ 00:15:41.956 { 00:15:41.956 "name": "spare", 00:15:41.956 "uuid": "1d1f44ec-6e88-5173-81ee-11c0bae3f626", 00:15:41.956 "is_configured": true, 00:15:41.956 "data_offset": 2048, 00:15:41.956 "data_size": 63488 00:15:41.956 }, 00:15:41.956 { 00:15:41.956 "name": "BaseBdev2", 00:15:41.956 "uuid": "ad9d25d4-da8d-5eff-aafa-3ea5261f0d9a", 00:15:41.956 "is_configured": true, 00:15:41.956 "data_offset": 2048, 00:15:41.956 "data_size": 63488 00:15:41.956 }, 00:15:41.956 { 00:15:41.956 "name": "BaseBdev3", 00:15:41.956 "uuid": "4ecb93a7-626a-5480-abeb-8e06df27bd11", 00:15:41.956 "is_configured": true, 00:15:41.956 "data_offset": 2048, 00:15:41.956 "data_size": 63488 00:15:41.956 }, 00:15:41.956 { 00:15:41.956 "name": "BaseBdev4", 00:15:41.956 "uuid": "dbd10f64-c6a9-53f9-b623-afdab309b3b5", 00:15:41.956 "is_configured": true, 
00:15:41.956 "data_offset": 2048, 00:15:41.956 "data_size": 63488 00:15:41.956 } 00:15:41.956 ] 00:15:41.956 }' 00:15:41.956 12:07:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:15:41.956 12:07:49 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:41.956 12:07:49 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:15:41.956 12:07:49 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:15:41.956 12:07:49 -- bdev/bdev_raid.sh@617 -- # '[' true = true ']' 00:15:41.956 12:07:49 -- bdev/bdev_raid.sh@617 -- # '[' = false ']' 00:15:41.956 /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdev_raid.sh: line 617: [: =: unary operator expected 00:15:41.956 12:07:49 -- bdev/bdev_raid.sh@642 -- # local num_base_bdevs_operational=4 00:15:41.956 12:07:49 -- bdev/bdev_raid.sh@644 -- # '[' raid1 = raid1 ']' 00:15:41.956 12:07:49 -- bdev/bdev_raid.sh@644 -- # '[' 4 -gt 2 ']' 00:15:41.956 12:07:49 -- bdev/bdev_raid.sh@646 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:42.219 [2024-07-25 12:07:49.299400] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:42.219 [2024-07-25 12:07:49.309686] bdev_raid.c:1981:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:42.482 [2024-07-25 12:07:49.654805] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x2207dc0 00:15:42.482 [2024-07-25 12:07:49.654840] bdev_raid.c:1835:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x2057200 00:15:42.750 12:07:49 -- bdev/bdev_raid.sh@649 -- # base_bdevs[1]= 00:15:42.750 12:07:49 -- bdev/bdev_raid.sh@650 -- # (( num_base_bdevs_operational-- )) 00:15:42.750 12:07:49 -- bdev/bdev_raid.sh@653 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:42.750 12:07:49 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 
00:15:42.750 12:07:49 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:15:42.750 12:07:49 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:15:42.750 12:07:49 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:15:42.750 12:07:49 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.750 12:07:49 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.750 12:07:49 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:15:42.750 "name": "raid_bdev1", 00:15:42.750 "uuid": "c642aae4-f69c-4b54-b8bc-8fb51bee8b71", 00:15:42.750 "strip_size_kb": 0, 00:15:42.750 "state": "online", 00:15:42.750 "raid_level": "raid1", 00:15:42.750 "superblock": true, 00:15:42.750 "num_base_bdevs": 4, 00:15:42.750 "num_base_bdevs_discovered": 3, 00:15:42.750 "num_base_bdevs_operational": 3, 00:15:42.750 "process": { 00:15:42.750 "type": "rebuild", 00:15:42.750 "target": "spare", 00:15:42.750 "progress": { 00:15:42.750 "blocks": 26624, 00:15:42.750 "percent": 41 00:15:42.750 } 00:15:42.750 }, 00:15:42.750 "base_bdevs_list": [ 00:15:42.750 { 00:15:42.750 "name": "spare", 00:15:42.750 "uuid": "1d1f44ec-6e88-5173-81ee-11c0bae3f626", 00:15:42.750 "is_configured": true, 00:15:42.750 "data_offset": 2048, 00:15:42.750 "data_size": 63488 00:15:42.750 }, 00:15:42.750 { 00:15:42.750 "name": null, 00:15:42.750 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.750 "is_configured": false, 00:15:42.750 "data_offset": 2048, 00:15:42.750 "data_size": 63488 00:15:42.750 }, 00:15:42.750 { 00:15:42.750 "name": "BaseBdev3", 00:15:42.750 "uuid": "4ecb93a7-626a-5480-abeb-8e06df27bd11", 00:15:42.750 "is_configured": true, 00:15:42.750 "data_offset": 2048, 00:15:42.750 "data_size": 63488 00:15:42.750 }, 00:15:42.750 { 00:15:42.750 "name": "BaseBdev4", 00:15:42.750 "uuid": "dbd10f64-c6a9-53f9-b623-afdab309b3b5", 00:15:42.750 "is_configured": true, 00:15:42.750 "data_offset": 
2048, 00:15:42.750 "data_size": 63488 00:15:42.750 } 00:15:42.750 ] 00:15:42.750 }' 00:15:42.750 12:07:49 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:15:42.750 12:07:49 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:42.750 12:07:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:15:42.750 [2024-07-25 12:07:50.004902] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:42.750 12:07:50 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:15:42.750 12:07:50 -- bdev/bdev_raid.sh@657 -- # local timeout=409 00:15:42.750 12:07:50 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:15:42.750 12:07:50 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:42.750 12:07:50 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:15:42.750 12:07:50 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:15:42.750 12:07:50 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:15:42.750 12:07:50 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:15:42.750 12:07:50 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.750 12:07:50 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.011 [2024-07-25 12:07:50.214738] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:15:43.011 12:07:50 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:15:43.011 "name": "raid_bdev1", 00:15:43.012 "uuid": "c642aae4-f69c-4b54-b8bc-8fb51bee8b71", 00:15:43.012 "strip_size_kb": 0, 00:15:43.012 "state": "online", 00:15:43.012 "raid_level": "raid1", 00:15:43.012 "superblock": true, 00:15:43.012 "num_base_bdevs": 4, 00:15:43.012 "num_base_bdevs_discovered": 3, 00:15:43.012 "num_base_bdevs_operational": 3, 00:15:43.012 
"process": { 00:15:43.012 "type": "rebuild", 00:15:43.012 "target": "spare", 00:15:43.012 "progress": { 00:15:43.012 "blocks": 30720, 00:15:43.012 "percent": 48 00:15:43.012 } 00:15:43.012 }, 00:15:43.012 "base_bdevs_list": [ 00:15:43.012 { 00:15:43.012 "name": "spare", 00:15:43.012 "uuid": "1d1f44ec-6e88-5173-81ee-11c0bae3f626", 00:15:43.012 "is_configured": true, 00:15:43.012 "data_offset": 2048, 00:15:43.012 "data_size": 63488 00:15:43.012 }, 00:15:43.012 { 00:15:43.012 "name": null, 00:15:43.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.012 "is_configured": false, 00:15:43.012 "data_offset": 2048, 00:15:43.012 "data_size": 63488 00:15:43.012 }, 00:15:43.012 { 00:15:43.012 "name": "BaseBdev3", 00:15:43.012 "uuid": "4ecb93a7-626a-5480-abeb-8e06df27bd11", 00:15:43.012 "is_configured": true, 00:15:43.012 "data_offset": 2048, 00:15:43.012 "data_size": 63488 00:15:43.012 }, 00:15:43.012 { 00:15:43.012 "name": "BaseBdev4", 00:15:43.012 "uuid": "dbd10f64-c6a9-53f9-b623-afdab309b3b5", 00:15:43.012 "is_configured": true, 00:15:43.012 "data_offset": 2048, 00:15:43.012 "data_size": 63488 00:15:43.012 } 00:15:43.012 ] 00:15:43.012 }' 00:15:43.012 12:07:50 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:15:43.012 12:07:50 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:43.012 12:07:50 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:15:43.012 12:07:50 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:15:43.012 12:07:50 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:15:43.948 [2024-07-25 12:07:51.003711] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:43.948 [2024-07-25 12:07:51.219672] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:15:44.206 12:07:51 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:15:44.206 12:07:51 -- 
bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:44.206 12:07:51 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:15:44.206 12:07:51 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:15:44.206 12:07:51 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:15:44.206 12:07:51 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:15:44.206 12:07:51 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.206 12:07:51 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:44.206 [2024-07-25 12:07:51.323650] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:15:44.206 [2024-07-25 12:07:51.324061] bdev_raid.c: 723:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:15:44.206 12:07:51 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:15:44.206 "name": "raid_bdev1", 00:15:44.206 "uuid": "c642aae4-f69c-4b54-b8bc-8fb51bee8b71", 00:15:44.206 "strip_size_kb": 0, 00:15:44.206 "state": "online", 00:15:44.206 "raid_level": "raid1", 00:15:44.206 "superblock": true, 00:15:44.206 "num_base_bdevs": 4, 00:15:44.206 "num_base_bdevs_discovered": 3, 00:15:44.206 "num_base_bdevs_operational": 3, 00:15:44.206 "process": { 00:15:44.206 "type": "rebuild", 00:15:44.206 "target": "spare", 00:15:44.206 "progress": { 00:15:44.206 "blocks": 53248, 00:15:44.206 "percent": 83 00:15:44.206 } 00:15:44.206 }, 00:15:44.206 "base_bdevs_list": [ 00:15:44.206 { 00:15:44.206 "name": "spare", 00:15:44.206 "uuid": "1d1f44ec-6e88-5173-81ee-11c0bae3f626", 00:15:44.206 "is_configured": true, 00:15:44.206 "data_offset": 2048, 00:15:44.206 "data_size": 63488 00:15:44.206 }, 00:15:44.206 { 00:15:44.206 "name": null, 00:15:44.206 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.206 "is_configured": 
false, 00:15:44.206 "data_offset": 2048, 00:15:44.206 "data_size": 63488 00:15:44.206 }, 00:15:44.206 { 00:15:44.206 "name": "BaseBdev3", 00:15:44.206 "uuid": "4ecb93a7-626a-5480-abeb-8e06df27bd11", 00:15:44.206 "is_configured": true, 00:15:44.206 "data_offset": 2048, 00:15:44.206 "data_size": 63488 00:15:44.206 }, 00:15:44.206 { 00:15:44.206 "name": "BaseBdev4", 00:15:44.206 "uuid": "dbd10f64-c6a9-53f9-b623-afdab309b3b5", 00:15:44.206 "is_configured": true, 00:15:44.206 "data_offset": 2048, 00:15:44.206 "data_size": 63488 00:15:44.206 } 00:15:44.206 ] 00:15:44.206 }' 00:15:44.206 12:07:51 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:15:44.472 12:07:51 -- bdev/bdev_raid.sh@190 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:44.472 12:07:51 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:15:44.472 12:07:51 -- bdev/bdev_raid.sh@191 -- # [[ spare == \s\p\a\r\e ]] 00:15:44.472 12:07:51 -- bdev/bdev_raid.sh@662 -- # sleep 1 00:15:44.733 [2024-07-25 12:07:51.977197] bdev_raid.c:2568:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:44.991 [2024-07-25 12:07:52.077450] bdev_raid.c:2285:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:44.991 [2024-07-25 12:07:52.078765] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.556 12:07:52 -- bdev/bdev_raid.sh@658 -- # (( SECONDS < timeout )) 00:15:45.556 12:07:52 -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:45.556 12:07:52 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:15:45.556 12:07:52 -- bdev/bdev_raid.sh@184 -- # local process_type=rebuild 00:15:45.556 12:07:52 -- bdev/bdev_raid.sh@185 -- # local target=spare 00:15:45.556 12:07:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:15:45.556 12:07:52 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:15:45.556 12:07:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.556 12:07:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:15:45.556 "name": "raid_bdev1", 00:15:45.556 "uuid": "c642aae4-f69c-4b54-b8bc-8fb51bee8b71", 00:15:45.556 "strip_size_kb": 0, 00:15:45.556 "state": "online", 00:15:45.556 "raid_level": "raid1", 00:15:45.556 "superblock": true, 00:15:45.556 "num_base_bdevs": 4, 00:15:45.556 "num_base_bdevs_discovered": 3, 00:15:45.556 "num_base_bdevs_operational": 3, 00:15:45.556 "base_bdevs_list": [ 00:15:45.556 { 00:15:45.556 "name": "spare", 00:15:45.556 "uuid": "1d1f44ec-6e88-5173-81ee-11c0bae3f626", 00:15:45.556 "is_configured": true, 00:15:45.556 "data_offset": 2048, 00:15:45.556 "data_size": 63488 00:15:45.556 }, 00:15:45.556 { 00:15:45.556 "name": null, 00:15:45.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.556 "is_configured": false, 00:15:45.556 "data_offset": 2048, 00:15:45.556 "data_size": 63488 00:15:45.556 }, 00:15:45.556 { 00:15:45.556 "name": "BaseBdev3", 00:15:45.556 "uuid": "4ecb93a7-626a-5480-abeb-8e06df27bd11", 00:15:45.556 "is_configured": true, 00:15:45.556 "data_offset": 2048, 00:15:45.556 "data_size": 63488 00:15:45.556 }, 00:15:45.556 { 00:15:45.556 "name": "BaseBdev4", 00:15:45.556 "uuid": "dbd10f64-c6a9-53f9-b623-afdab309b3b5", 00:15:45.556 "is_configured": true, 00:15:45.556 "data_offset": 2048, 00:15:45.556 "data_size": 63488 00:15:45.556 } 00:15:45.556 ] 00:15:45.556 }' 00:15:45.556 12:07:52 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:15:45.556 12:07:52 -- bdev/bdev_raid.sh@190 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:45.556 12:07:52 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:15:45.556 12:07:52 -- bdev/bdev_raid.sh@191 -- # [[ none == \s\p\a\r\e ]] 00:15:45.556 12:07:52 -- bdev/bdev_raid.sh@660 -- # break 00:15:45.556 12:07:52 -- bdev/bdev_raid.sh@666 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:45.556 12:07:52 -- 
bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:15:45.556 12:07:52 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:15:45.556 12:07:52 -- bdev/bdev_raid.sh@185 -- # local target=none 00:15:45.556 12:07:52 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:15:45.556 12:07:52 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.556 12:07:52 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:45.814 12:07:52 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:15:45.814 "name": "raid_bdev1", 00:15:45.814 "uuid": "c642aae4-f69c-4b54-b8bc-8fb51bee8b71", 00:15:45.814 "strip_size_kb": 0, 00:15:45.814 "state": "online", 00:15:45.814 "raid_level": "raid1", 00:15:45.814 "superblock": true, 00:15:45.814 "num_base_bdevs": 4, 00:15:45.814 "num_base_bdevs_discovered": 3, 00:15:45.814 "num_base_bdevs_operational": 3, 00:15:45.814 "base_bdevs_list": [ 00:15:45.814 { 00:15:45.814 "name": "spare", 00:15:45.814 "uuid": "1d1f44ec-6e88-5173-81ee-11c0bae3f626", 00:15:45.814 "is_configured": true, 00:15:45.814 "data_offset": 2048, 00:15:45.814 "data_size": 63488 00:15:45.814 }, 00:15:45.814 { 00:15:45.814 "name": null, 00:15:45.814 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.814 "is_configured": false, 00:15:45.814 "data_offset": 2048, 00:15:45.814 "data_size": 63488 00:15:45.814 }, 00:15:45.814 { 00:15:45.814 "name": "BaseBdev3", 00:15:45.814 "uuid": "4ecb93a7-626a-5480-abeb-8e06df27bd11", 00:15:45.814 "is_configured": true, 00:15:45.814 "data_offset": 2048, 00:15:45.814 "data_size": 63488 00:15:45.814 }, 00:15:45.814 { 00:15:45.814 "name": "BaseBdev4", 00:15:45.814 "uuid": "dbd10f64-c6a9-53f9-b623-afdab309b3b5", 00:15:45.814 "is_configured": true, 00:15:45.814 "data_offset": 2048, 00:15:45.814 "data_size": 63488 00:15:45.814 } 00:15:45.814 ] 00:15:45.814 }' 00:15:45.814 12:07:52 -- bdev/bdev_raid.sh@190 -- # jq -r 
'.process.type // "none"' 00:15:45.814 12:07:53 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:15:45.814 12:07:53 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:15:45.814 12:07:53 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:15:45.814 12:07:53 -- bdev/bdev_raid.sh@667 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:45.814 12:07:53 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:45.814 12:07:53 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:45.814 12:07:53 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:45.814 12:07:53 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:45.814 12:07:53 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:45.814 12:07:53 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:45.814 12:07:53 -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs 00:15:45.814 12:07:53 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:45.814 12:07:53 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:45.814 12:07:53 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.814 12:07:53 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:46.072 12:07:53 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:46.072 "name": "raid_bdev1", 00:15:46.072 "uuid": "c642aae4-f69c-4b54-b8bc-8fb51bee8b71", 00:15:46.072 "strip_size_kb": 0, 00:15:46.072 "state": "online", 00:15:46.072 "raid_level": "raid1", 00:15:46.072 "superblock": true, 00:15:46.072 "num_base_bdevs": 4, 00:15:46.072 "num_base_bdevs_discovered": 3, 00:15:46.072 "num_base_bdevs_operational": 3, 00:15:46.072 "base_bdevs_list": [ 00:15:46.072 { 00:15:46.072 "name": "spare", 00:15:46.072 "uuid": "1d1f44ec-6e88-5173-81ee-11c0bae3f626", 00:15:46.072 "is_configured": true, 00:15:46.072 "data_offset": 2048, 00:15:46.072 "data_size": 63488 00:15:46.072 }, 
00:15:46.072 { 00:15:46.072 "name": null, 00:15:46.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.072 "is_configured": false, 00:15:46.072 "data_offset": 2048, 00:15:46.072 "data_size": 63488 00:15:46.072 }, 00:15:46.072 { 00:15:46.072 "name": "BaseBdev3", 00:15:46.072 "uuid": "4ecb93a7-626a-5480-abeb-8e06df27bd11", 00:15:46.072 "is_configured": true, 00:15:46.072 "data_offset": 2048, 00:15:46.072 "data_size": 63488 00:15:46.072 }, 00:15:46.072 { 00:15:46.072 "name": "BaseBdev4", 00:15:46.072 "uuid": "dbd10f64-c6a9-53f9-b623-afdab309b3b5", 00:15:46.072 "is_configured": true, 00:15:46.072 "data_offset": 2048, 00:15:46.072 "data_size": 63488 00:15:46.072 } 00:15:46.072 ] 00:15:46.072 }' 00:15:46.072 12:07:53 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:46.072 12:07:53 -- common/autotest_common.sh@10 -- # set +x 00:15:46.637 12:07:53 -- bdev/bdev_raid.sh@670 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:46.637 [2024-07-25 12:07:53.891244] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:46.637 [2024-07-25 12:07:53.891277] bdev_raid.c:1734:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:46.637 00:15:46.637 Latency(us) 00:15:46.637 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:46.637 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:46.637 raid_bdev1 : 9.59 107.97 323.92 0.00 0.00 13109.94 254.66 114887.46 00:15:46.637 =================================================================================================================== 00:15:46.637 Total : 107.97 323.92 0.00 0.00 13109.94 254.66 114887.46 00:15:46.637 [2024-07-25 12:07:53.930165] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.637 [2024-07-25 12:07:53.930185] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:15:46.637 [2024-07-25 12:07:53.930247] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:46.637 [2024-07-25 12:07:53.930255] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20563c0 name raid_bdev1, state offline 00:15:46.637 0 00:15:46.895 12:07:53 -- bdev/bdev_raid.sh@671 -- # jq length 00:15:46.895 12:07:53 -- bdev/bdev_raid.sh@671 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.895 12:07:54 -- bdev/bdev_raid.sh@671 -- # [[ 0 == 0 ]] 00:15:46.895 12:07:54 -- bdev/bdev_raid.sh@673 -- # '[' true = true ']' 00:15:46.895 12:07:54 -- bdev/bdev_raid.sh@675 -- # nbd_start_disks /var/tmp/spdk-raid.sock spare /dev/nbd0 00:15:46.895 12:07:54 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:46.895 12:07:54 -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:46.895 12:07:54 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:46.895 12:07:54 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:46.895 12:07:54 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:46.895 12:07:54 -- bdev/nbd_common.sh@12 -- # local i 00:15:46.895 12:07:54 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:46.895 12:07:54 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:46.895 12:07:54 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk spare /dev/nbd0 00:15:47.154 /dev/nbd0 00:15:47.154 12:07:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:47.154 12:07:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:47.154 12:07:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:15:47.154 12:07:54 -- common/autotest_common.sh@857 -- # local i 00:15:47.154 12:07:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:47.154 12:07:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:47.154 
12:07:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:15:47.154 12:07:54 -- common/autotest_common.sh@861 -- # break 00:15:47.154 12:07:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:47.154 12:07:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:47.154 12:07:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:47.154 1+0 records in 00:15:47.154 1+0 records out 00:15:47.154 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000186412 s, 22.0 MB/s 00:15:47.154 12:07:54 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:15:47.154 12:07:54 -- common/autotest_common.sh@874 -- # size=4096 00:15:47.154 12:07:54 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:15:47.154 12:07:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:47.154 12:07:54 -- common/autotest_common.sh@877 -- # return 0 00:15:47.154 12:07:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:47.154 12:07:54 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:47.154 12:07:54 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:15:47.154 12:07:54 -- bdev/bdev_raid.sh@677 -- # '[' -z '' ']' 00:15:47.154 12:07:54 -- bdev/bdev_raid.sh@678 -- # continue 00:15:47.154 12:07:54 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:15:47.154 12:07:54 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev3 ']' 00:15:47.154 12:07:54 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev3 /dev/nbd1 00:15:47.154 12:07:54 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:47.154 12:07:54 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:47.154 12:07:54 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:47.154 12:07:54 -- bdev/nbd_common.sh@11 -- # 
nbd_list=('/dev/nbd1') 00:15:47.154 12:07:54 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:47.154 12:07:54 -- bdev/nbd_common.sh@12 -- # local i 00:15:47.154 12:07:54 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:47.154 12:07:54 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:47.154 12:07:54 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:47.413 /dev/nbd1 00:15:47.413 12:07:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:47.413 12:07:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:47.413 12:07:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:15:47.413 12:07:54 -- common/autotest_common.sh@857 -- # local i 00:15:47.413 12:07:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:47.413 12:07:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:47.413 12:07:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:15:47.413 12:07:54 -- common/autotest_common.sh@861 -- # break 00:15:47.413 12:07:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:47.413 12:07:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:47.413 12:07:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:47.413 1+0 records in 00:15:47.413 1+0 records out 00:15:47.413 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000156285 s, 26.2 MB/s 00:15:47.413 12:07:54 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:15:47.413 12:07:54 -- common/autotest_common.sh@874 -- # size=4096 00:15:47.413 12:07:54 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:15:47.413 12:07:54 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:47.413 12:07:54 -- common/autotest_common.sh@877 -- 
# return 0 00:15:47.413 12:07:54 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:47.413 12:07:54 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:47.413 12:07:54 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:47.413 12:07:54 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:15:47.413 12:07:54 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:47.413 12:07:54 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:47.413 12:07:54 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:47.413 12:07:54 -- bdev/nbd_common.sh@51 -- # local i 00:15:47.413 12:07:54 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:47.413 12:07:54 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:15:47.671 12:07:54 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:47.671 12:07:54 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:47.671 12:07:54 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:47.671 12:07:54 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:47.671 12:07:54 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:47.671 12:07:54 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:47.671 12:07:54 -- bdev/nbd_common.sh@41 -- # break 00:15:47.671 12:07:54 -- bdev/nbd_common.sh@45 -- # return 0 00:15:47.671 12:07:54 -- bdev/bdev_raid.sh@676 -- # for bdev in "${base_bdevs[@]:1}" 00:15:47.671 12:07:54 -- bdev/bdev_raid.sh@677 -- # '[' -z BaseBdev4 ']' 00:15:47.671 12:07:54 -- bdev/bdev_raid.sh@680 -- # nbd_start_disks /var/tmp/spdk-raid.sock BaseBdev4 /dev/nbd1 00:15:47.671 12:07:54 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:47.671 12:07:54 -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:47.671 12:07:54 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:47.671 12:07:54 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 
00:15:47.671 12:07:54 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:47.671 12:07:54 -- bdev/nbd_common.sh@12 -- # local i 00:15:47.671 12:07:54 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:47.671 12:07:54 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:47.671 12:07:54 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:47.671 /dev/nbd1 00:15:47.671 12:07:54 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:47.671 12:07:54 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:47.671 12:07:54 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:15:47.671 12:07:54 -- common/autotest_common.sh@857 -- # local i 00:15:47.671 12:07:54 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:15:47.671 12:07:54 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:15:47.671 12:07:54 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:15:47.930 12:07:54 -- common/autotest_common.sh@861 -- # break 00:15:47.930 12:07:54 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:15:47.930 12:07:54 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:15:47.930 12:07:54 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:47.930 1+0 records in 00:15:47.930 1+0 records out 00:15:47.930 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000247511 s, 16.5 MB/s 00:15:47.930 12:07:54 -- common/autotest_common.sh@874 -- # stat -c %s /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:15:47.930 12:07:54 -- common/autotest_common.sh@874 -- # size=4096 00:15:47.930 12:07:54 -- common/autotest_common.sh@875 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/nbdtest 00:15:47.930 12:07:55 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:15:47.930 12:07:55 -- common/autotest_common.sh@877 -- # return 0 00:15:47.930 
12:07:55 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:47.930 12:07:55 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:47.930 12:07:55 -- bdev/bdev_raid.sh@681 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:47.930 12:07:55 -- bdev/bdev_raid.sh@682 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd1 00:15:47.930 12:07:55 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:47.930 12:07:55 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:47.931 12:07:55 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:47.931 12:07:55 -- bdev/nbd_common.sh@51 -- # local i 00:15:47.931 12:07:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:47.931 12:07:55 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd1 00:15:47.931 12:07:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:47.931 12:07:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:47.931 12:07:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:47.931 12:07:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:47.931 12:07:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:47.931 12:07:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:47.931 12:07:55 -- bdev/nbd_common.sh@41 -- # break 00:15:47.931 12:07:55 -- bdev/nbd_common.sh@45 -- # return 0 00:15:47.931 12:07:55 -- bdev/bdev_raid.sh@684 -- # nbd_stop_disks /var/tmp/spdk-raid.sock /dev/nbd0 00:15:47.931 12:07:55 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-raid.sock 00:15:47.931 12:07:55 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:47.931 12:07:55 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:47.931 12:07:55 -- bdev/nbd_common.sh@51 -- # local i 00:15:47.931 12:07:55 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:47.931 12:07:55 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock nbd_stop_disk /dev/nbd0 00:15:48.189 12:07:55 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:48.189 12:07:55 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:48.189 12:07:55 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:48.189 12:07:55 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:48.189 12:07:55 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:48.189 12:07:55 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:48.189 12:07:55 -- bdev/nbd_common.sh@41 -- # break 00:15:48.189 12:07:55 -- bdev/nbd_common.sh@45 -- # return 0 00:15:48.189 12:07:55 -- bdev/bdev_raid.sh@692 -- # '[' true = true ']' 00:15:48.189 12:07:55 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:15:48.189 12:07:55 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev1 ']' 00:15:48.189 12:07:55 -- bdev/bdev_raid.sh@698 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:15:48.448 12:07:55 -- bdev/bdev_raid.sh@699 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:48.448 [2024-07-25 12:07:55.715417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:48.448 [2024-07-25 12:07:55.715456] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.448 [2024-07-25 12:07:55.715472] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x205bc30 00:15:48.448 [2024-07-25 12:07:55.715480] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.448 [2024-07-25 12:07:55.716704] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.448 [2024-07-25 12:07:55.716728] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:48.448 [2024-07-25 12:07:55.716782] 
bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:48.448 [2024-07-25 12:07:55.716801] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:48.448 BaseBdev1 00:15:48.448 12:07:55 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:15:48.448 12:07:55 -- bdev/bdev_raid.sh@695 -- # '[' -z '' ']' 00:15:48.448 12:07:55 -- bdev/bdev_raid.sh@696 -- # continue 00:15:48.448 12:07:55 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:15:48.448 12:07:55 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev3 ']' 00:15:48.448 12:07:55 -- bdev/bdev_raid.sh@698 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev3 00:15:48.707 12:07:55 -- bdev/bdev_raid.sh@699 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:48.966 [2024-07-25 12:07:56.036255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:48.966 [2024-07-25 12:07:56.036288] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.966 [2024-07-25 12:07:56.036318] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2057180 00:15:48.966 [2024-07-25 12:07:56.036326] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.966 [2024-07-25 12:07:56.036555] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.966 [2024-07-25 12:07:56.036567] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:48.966 [2024-07-25 12:07:56.036624] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev3 00:15:48.966 [2024-07-25 12:07:56.036631] bdev_raid.c:3237:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev3 (4) greater 
than existing raid bdev raid_bdev1 (1) 00:15:48.966 [2024-07-25 12:07:56.036638] bdev_raid.c:2137:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:48.966 [2024-07-25 12:07:56.036650] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20f3720 name raid_bdev1, state configuring 00:15:48.966 [2024-07-25 12:07:56.036672] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:48.966 BaseBdev3 00:15:48.966 12:07:56 -- bdev/bdev_raid.sh@694 -- # for bdev in "${base_bdevs[@]}" 00:15:48.966 12:07:56 -- bdev/bdev_raid.sh@695 -- # '[' -z BaseBdev4 ']' 00:15:48.966 12:07:56 -- bdev/bdev_raid.sh@698 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev4 00:15:48.966 12:07:56 -- bdev/bdev_raid.sh@699 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:49.225 [2024-07-25 12:07:56.357114] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:49.225 [2024-07-25 12:07:56.357151] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.225 [2024-07-25 12:07:56.357167] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x205b070 00:15:49.225 [2024-07-25 12:07:56.357175] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.225 [2024-07-25 12:07:56.357419] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.225 [2024-07-25 12:07:56.357432] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:49.225 [2024-07-25 12:07:56.357476] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev BaseBdev4 00:15:49.225 [2024-07-25 12:07:56.357489] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:49.225 BaseBdev4 
00:15:49.225 12:07:56 -- bdev/bdev_raid.sh@701 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:15:49.483 12:07:56 -- bdev/bdev_raid.sh@702 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:15:49.483 [2024-07-25 12:07:56.694021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:49.483 [2024-07-25 12:07:56.694055] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:49.483 [2024-07-25 12:07:56.694085] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2056850 00:15:49.483 [2024-07-25 12:07:56.694093] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:49.483 [2024-07-25 12:07:56.694376] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:49.483 [2024-07-25 12:07:56.694388] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:49.483 [2024-07-25 12:07:56.694444] bdev_raid.c:3342:raid_bdev_examine_load_sb_cb: *DEBUG*: raid superblock found on bdev spare 00:15:49.483 [2024-07-25 12:07:56.694457] bdev_raid.c:2939:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:49.483 spare 00:15:49.483 12:07:56 -- bdev/bdev_raid.sh@704 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:49.483 12:07:56 -- bdev/bdev_raid.sh@117 -- # local raid_bdev_name=raid_bdev1 00:15:49.484 12:07:56 -- bdev/bdev_raid.sh@118 -- # local expected_state=online 00:15:49.484 12:07:56 -- bdev/bdev_raid.sh@119 -- # local raid_level=raid1 00:15:49.484 12:07:56 -- bdev/bdev_raid.sh@120 -- # local strip_size=0 00:15:49.484 12:07:56 -- bdev/bdev_raid.sh@121 -- # local num_base_bdevs_operational=3 00:15:49.484 12:07:56 -- bdev/bdev_raid.sh@122 -- # local raid_bdev_info 00:15:49.484 12:07:56 -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs 00:15:49.484 12:07:56 -- bdev/bdev_raid.sh@124 -- # local num_base_bdevs_discovered 00:15:49.484 12:07:56 -- bdev/bdev_raid.sh@125 -- # local tmp 00:15:49.484 12:07:56 -- bdev/bdev_raid.sh@127 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:49.484 12:07:56 -- bdev/bdev_raid.sh@127 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:49.742 [2024-07-25 12:07:56.794766] bdev_raid.c:1584:raid_bdev_configure_cont: *DEBUG*: io device register 0x20f39a0 00:15:49.742 [2024-07-25 12:07:56.794779] bdev_raid.c:1585:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:49.742 [2024-07-25 12:07:56.794931] bdev_raid.c: 232:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2056ae0 00:15:49.742 [2024-07-25 12:07:56.795049] bdev_raid.c:1614:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x20f39a0 00:15:49.742 [2024-07-25 12:07:56.795056] bdev_raid.c:1615:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x20f39a0 00:15:49.742 [2024-07-25 12:07:56.795141] bdev_raid.c: 316:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:49.742 12:07:56 -- bdev/bdev_raid.sh@127 -- # raid_bdev_info='{ 00:15:49.742 "name": "raid_bdev1", 00:15:49.742 "uuid": "c642aae4-f69c-4b54-b8bc-8fb51bee8b71", 00:15:49.742 "strip_size_kb": 0, 00:15:49.742 "state": "online", 00:15:49.742 "raid_level": "raid1", 00:15:49.742 "superblock": true, 00:15:49.742 "num_base_bdevs": 4, 00:15:49.742 "num_base_bdevs_discovered": 3, 00:15:49.742 "num_base_bdevs_operational": 3, 00:15:49.742 "base_bdevs_list": [ 00:15:49.742 { 00:15:49.742 "name": "spare", 00:15:49.742 "uuid": "1d1f44ec-6e88-5173-81ee-11c0bae3f626", 00:15:49.742 "is_configured": true, 00:15:49.742 "data_offset": 2048, 00:15:49.742 "data_size": 63488 00:15:49.742 }, 00:15:49.742 { 00:15:49.742 "name": null, 00:15:49.742 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:49.742 "is_configured": 
false, 00:15:49.743 "data_offset": 2048, 00:15:49.743 "data_size": 63488 00:15:49.743 }, 00:15:49.743 { 00:15:49.743 "name": "BaseBdev3", 00:15:49.743 "uuid": "4ecb93a7-626a-5480-abeb-8e06df27bd11", 00:15:49.743 "is_configured": true, 00:15:49.743 "data_offset": 2048, 00:15:49.743 "data_size": 63488 00:15:49.743 }, 00:15:49.743 { 00:15:49.743 "name": "BaseBdev4", 00:15:49.743 "uuid": "dbd10f64-c6a9-53f9-b623-afdab309b3b5", 00:15:49.743 "is_configured": true, 00:15:49.743 "data_offset": 2048, 00:15:49.743 "data_size": 63488 00:15:49.743 } 00:15:49.743 ] 00:15:49.743 }' 00:15:49.743 12:07:56 -- bdev/bdev_raid.sh@129 -- # xtrace_disable 00:15:49.743 12:07:56 -- common/autotest_common.sh@10 -- # set +x 00:15:50.309 12:07:57 -- bdev/bdev_raid.sh@705 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:50.309 12:07:57 -- bdev/bdev_raid.sh@183 -- # local raid_bdev_name=raid_bdev1 00:15:50.309 12:07:57 -- bdev/bdev_raid.sh@184 -- # local process_type=none 00:15:50.309 12:07:57 -- bdev/bdev_raid.sh@185 -- # local target=none 00:15:50.309 12:07:57 -- bdev/bdev_raid.sh@186 -- # local raid_bdev_info 00:15:50.309 12:07:57 -- bdev/bdev_raid.sh@188 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:50.309 12:07:57 -- bdev/bdev_raid.sh@188 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.310 12:07:57 -- bdev/bdev_raid.sh@188 -- # raid_bdev_info='{ 00:15:50.310 "name": "raid_bdev1", 00:15:50.310 "uuid": "c642aae4-f69c-4b54-b8bc-8fb51bee8b71", 00:15:50.310 "strip_size_kb": 0, 00:15:50.310 "state": "online", 00:15:50.310 "raid_level": "raid1", 00:15:50.310 "superblock": true, 00:15:50.310 "num_base_bdevs": 4, 00:15:50.310 "num_base_bdevs_discovered": 3, 00:15:50.310 "num_base_bdevs_operational": 3, 00:15:50.310 "base_bdevs_list": [ 00:15:50.310 { 00:15:50.310 "name": "spare", 00:15:50.310 "uuid": "1d1f44ec-6e88-5173-81ee-11c0bae3f626", 00:15:50.310 "is_configured": true, 00:15:50.310 "data_offset": 
2048, 00:15:50.310 "data_size": 63488 00:15:50.310 }, 00:15:50.310 { 00:15:50.310 "name": null, 00:15:50.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.310 "is_configured": false, 00:15:50.310 "data_offset": 2048, 00:15:50.310 "data_size": 63488 00:15:50.310 }, 00:15:50.310 { 00:15:50.310 "name": "BaseBdev3", 00:15:50.310 "uuid": "4ecb93a7-626a-5480-abeb-8e06df27bd11", 00:15:50.310 "is_configured": true, 00:15:50.310 "data_offset": 2048, 00:15:50.310 "data_size": 63488 00:15:50.310 }, 00:15:50.310 { 00:15:50.310 "name": "BaseBdev4", 00:15:50.310 "uuid": "dbd10f64-c6a9-53f9-b623-afdab309b3b5", 00:15:50.310 "is_configured": true, 00:15:50.310 "data_offset": 2048, 00:15:50.310 "data_size": 63488 00:15:50.310 } 00:15:50.310 ] 00:15:50.310 }' 00:15:50.310 12:07:57 -- bdev/bdev_raid.sh@190 -- # jq -r '.process.type // "none"' 00:15:50.310 12:07:57 -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:15:50.310 12:07:57 -- bdev/bdev_raid.sh@191 -- # jq -r '.process.target // "none"' 00:15:50.568 12:07:57 -- bdev/bdev_raid.sh@191 -- # [[ none == \n\o\n\e ]] 00:15:50.569 12:07:57 -- bdev/bdev_raid.sh@706 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.569 12:07:57 -- bdev/bdev_raid.sh@706 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:50.569 12:07:57 -- bdev/bdev_raid.sh@706 -- # [[ spare == \s\p\a\r\e ]] 00:15:50.569 12:07:57 -- bdev/bdev_raid.sh@709 -- # killprocess 1273303 00:15:50.569 12:07:57 -- common/autotest_common.sh@926 -- # '[' -z 1273303 ']' 00:15:50.569 12:07:57 -- common/autotest_common.sh@930 -- # kill -0 1273303 00:15:50.569 12:07:57 -- common/autotest_common.sh@931 -- # uname 00:15:50.569 12:07:57 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:50.569 12:07:57 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1273303 00:15:50.569 12:07:57 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:50.569 12:07:57 -- 
common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:50.569 12:07:57 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1273303' 00:15:50.569 killing process with pid 1273303 00:15:50.569 12:07:57 -- common/autotest_common.sh@945 -- # kill 1273303 00:15:50.569 Received shutdown signal, test time was about 13.500752 seconds 00:15:50.569 00:15:50.569 Latency(us) 00:15:50.569 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:50.569 =================================================================================================================== 00:15:50.569 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:50.569 [2024-07-25 12:07:57.851842] bdev_raid.c:1234:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:50.569 [2024-07-25 12:07:57.851896] bdev_raid.c: 449:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:50.569 [2024-07-25 12:07:57.851948] bdev_raid.c: 426:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:50.569 [2024-07-25 12:07:57.851956] bdev_raid.c: 351:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x20f39a0 name raid_bdev1, state offline 00:15:50.569 12:07:57 -- common/autotest_common.sh@950 -- # wait 1273303 00:15:50.828 [2024-07-25 12:07:57.897255] bdev_raid.c:1251:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:50.828 12:07:58 -- bdev/bdev_raid.sh@711 -- # return 0 00:15:50.828 00:15:50.828 real 0m17.687s 00:15:50.828 user 0m27.154s 00:15:50.828 sys 0m3.158s 00:15:50.828 12:07:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:50.828 12:07:58 -- common/autotest_common.sh@10 -- # set +x 00:15:50.828 ************************************ 00:15:50.828 END TEST raid_rebuild_test_sb_io 00:15:50.828 ************************************ 00:15:51.087 12:07:58 -- bdev/bdev_raid.sh@742 -- # '[' n == y ']' 00:15:51.087 12:07:58 -- bdev/bdev_raid.sh@754 -- # rm -f /raidrandtest 00:15:51.087 00:15:51.087 real 6m26.695s 00:15:51.087 
user 10m24.940s 00:15:51.087 sys 1m18.845s 00:15:51.087 12:07:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:51.087 12:07:58 -- common/autotest_common.sh@10 -- # set +x 00:15:51.087 ************************************ 00:15:51.087 END TEST bdev_raid 00:15:51.087 ************************************ 00:15:51.087 12:07:58 -- spdk/autotest.sh@197 -- # run_test bdevperf_config /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh 00:15:51.087 12:07:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:15:51.087 12:07:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:51.087 12:07:58 -- common/autotest_common.sh@10 -- # set +x 00:15:51.087 ************************************ 00:15:51.087 START TEST bdevperf_config 00:15:51.087 ************************************ 00:15:51.087 12:07:58 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test_config.sh 00:15:51.087 * Looking for test storage... 
00:15:51.087 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf 00:15:51.087 12:07:58 -- bdevperf/test_config.sh@10 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/common.sh 00:15:51.087 12:07:58 -- bdevperf/common.sh@5 -- # bdevperf=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf 00:15:51.087 12:07:58 -- bdevperf/test_config.sh@12 -- # jsonconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json 00:15:51.087 12:07:58 -- bdevperf/test_config.sh@13 -- # testconf=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:15:51.087 12:07:58 -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:15:51.087 12:07:58 -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:15:51.087 12:07:58 -- bdevperf/common.sh@8 -- # local job_section=global 00:15:51.087 12:07:58 -- bdevperf/common.sh@9 -- # local rw=read 00:15:51.087 12:07:58 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:15:51.087 12:07:58 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:15:51.087 12:07:58 -- bdevperf/common.sh@13 -- # cat 00:15:51.087 12:07:58 -- bdevperf/common.sh@18 -- # job='[global]' 00:15:51.087 12:07:58 -- bdevperf/common.sh@19 -- # echo 00:15:51.087 00:15:51.087 12:07:58 -- bdevperf/common.sh@20 -- # cat 00:15:51.087 12:07:58 -- bdevperf/test_config.sh@18 -- # create_job job0 00:15:51.087 12:07:58 -- bdevperf/common.sh@8 -- # local job_section=job0 00:15:51.087 12:07:58 -- bdevperf/common.sh@9 -- # local rw= 00:15:51.087 12:07:58 -- bdevperf/common.sh@10 -- # local filename= 00:15:51.087 12:07:58 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:15:51.087 12:07:58 -- bdevperf/common.sh@18 -- # job='[job0]' 00:15:51.087 12:07:58 -- bdevperf/common.sh@19 -- # echo 00:15:51.087 00:15:51.087 12:07:58 -- bdevperf/common.sh@20 -- # cat 00:15:51.087 12:07:58 -- bdevperf/test_config.sh@19 -- # 
create_job job1 00:15:51.087 12:07:58 -- bdevperf/common.sh@8 -- # local job_section=job1 00:15:51.087 12:07:58 -- bdevperf/common.sh@9 -- # local rw= 00:15:51.087 12:07:58 -- bdevperf/common.sh@10 -- # local filename= 00:15:51.087 12:07:58 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:15:51.087 12:07:58 -- bdevperf/common.sh@18 -- # job='[job1]' 00:15:51.087 12:07:58 -- bdevperf/common.sh@19 -- # echo 00:15:51.087 00:15:51.087 12:07:58 -- bdevperf/common.sh@20 -- # cat 00:15:51.087 12:07:58 -- bdevperf/test_config.sh@20 -- # create_job job2 00:15:51.087 12:07:58 -- bdevperf/common.sh@8 -- # local job_section=job2 00:15:51.087 12:07:58 -- bdevperf/common.sh@9 -- # local rw= 00:15:51.087 12:07:58 -- bdevperf/common.sh@10 -- # local filename= 00:15:51.087 12:07:58 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:15:51.087 12:07:58 -- bdevperf/common.sh@18 -- # job='[job2]' 00:15:51.087 12:07:58 -- bdevperf/common.sh@19 -- # echo 00:15:51.087 00:15:51.087 12:07:58 -- bdevperf/common.sh@20 -- # cat 00:15:51.087 12:07:58 -- bdevperf/test_config.sh@21 -- # create_job job3 00:15:51.087 12:07:58 -- bdevperf/common.sh@8 -- # local job_section=job3 00:15:51.087 12:07:58 -- bdevperf/common.sh@9 -- # local rw= 00:15:51.087 12:07:58 -- bdevperf/common.sh@10 -- # local filename= 00:15:51.087 12:07:58 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:15:51.087 12:07:58 -- bdevperf/common.sh@18 -- # job='[job3]' 00:15:51.087 12:07:58 -- bdevperf/common.sh@19 -- # echo 00:15:51.087 00:15:51.087 12:07:58 -- bdevperf/common.sh@20 -- # cat 00:15:51.087 12:07:58 -- bdevperf/test_config.sh@22 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:15:54.377 12:08:00 -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-25 12:07:58.372438] Starting SPDK 
v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:54.377 [2024-07-25 12:07:58.372490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1275994 ] 00:15:54.377 Using job config with 4 jobs 00:15:54.377 [2024-07-25 12:07:58.467528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.377 [2024-07-25 12:07:58.570253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.377 cpumask for '\''job0'\'' is too big 00:15:54.377 cpumask for '\''job1'\'' is too big 00:15:54.377 cpumask for '\''job2'\'' is too big 00:15:54.377 cpumask for '\''job3'\'' is too big 00:15:54.377 Running I/O for 2 seconds... 00:15:54.377 00:15:54.377 Latency(us) 00:15:54.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:54.377 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:15:54.377 Malloc0 : 2.01 39374.52 38.45 0.00 0.00 6496.53 1196.74 9858.89 00:15:54.377 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:15:54.377 Malloc0 : 2.01 39350.47 38.43 0.00 0.00 6491.46 1125.51 9061.06 00:15:54.377 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:15:54.377 Malloc0 : 2.01 39387.84 38.46 0.00 0.00 6476.92 1118.39 9175.04 00:15:54.377 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:15:54.377 Malloc0 : 2.02 39364.55 38.44 0.00 0.00 6471.66 1132.63 9175.04 00:15:54.377 =================================================================================================================== 00:15:54.377 Total : 157477.38 153.79 0.00 0.00 6484.13 1118.39 9858.89' 00:15:54.377 12:08:00 -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-25 12:07:58.372438] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:15:54.377 [2024-07-25 12:07:58.372490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1275994 ] 00:15:54.377 Using job config with 4 jobs 00:15:54.377 [2024-07-25 12:07:58.467528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.377 [2024-07-25 12:07:58.570253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.377 cpumask for '\''job0'\'' is too big 00:15:54.377 cpumask for '\''job1'\'' is too big 00:15:54.377 cpumask for '\''job2'\'' is too big 00:15:54.377 cpumask for '\''job3'\'' is too big 00:15:54.377 Running I/O for 2 seconds... 00:15:54.377 00:15:54.377 Latency(us) 00:15:54.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:54.377 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:15:54.377 Malloc0 : 2.01 39374.52 38.45 0.00 0.00 6496.53 1196.74 9858.89 00:15:54.377 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:15:54.377 Malloc0 : 2.01 39350.47 38.43 0.00 0.00 6491.46 1125.51 9061.06 00:15:54.377 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:15:54.377 Malloc0 : 2.01 39387.84 38.46 0.00 0.00 6476.92 1118.39 9175.04 00:15:54.377 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:15:54.377 Malloc0 : 2.02 39364.55 38.44 0.00 0.00 6471.66 1132.63 9175.04 00:15:54.377 =================================================================================================================== 00:15:54.377 Total : 157477.38 153.79 0.00 0.00 6484.13 1118.39 9858.89' 00:15:54.377 12:08:00 -- bdevperf/common.sh@32 -- # echo '[2024-07-25 12:07:58.372438] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:15:54.377 [2024-07-25 12:07:58.372490] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1275994 ] 00:15:54.377 Using job config with 4 jobs 00:15:54.377 [2024-07-25 12:07:58.467528] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.377 [2024-07-25 12:07:58.570253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.377 cpumask for '\''job0'\'' is too big 00:15:54.377 cpumask for '\''job1'\'' is too big 00:15:54.377 cpumask for '\''job2'\'' is too big 00:15:54.377 cpumask for '\''job3'\'' is too big 00:15:54.377 Running I/O for 2 seconds... 00:15:54.377 00:15:54.377 Latency(us) 00:15:54.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:54.377 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:15:54.377 Malloc0 : 2.01 39374.52 38.45 0.00 0.00 6496.53 1196.74 9858.89 00:15:54.377 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:15:54.377 Malloc0 : 2.01 39350.47 38.43 0.00 0.00 6491.46 1125.51 9061.06 00:15:54.377 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:15:54.377 Malloc0 : 2.01 39387.84 38.46 0.00 0.00 6476.92 1118.39 9175.04 00:15:54.377 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:15:54.377 Malloc0 : 2.02 39364.55 38.44 0.00 0.00 6471.66 1132.63 9175.04 00:15:54.377 =================================================================================================================== 00:15:54.377 Total : 157477.38 153.79 0.00 0.00 6484.13 1118.39 9858.89' 00:15:54.377 12:08:00 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:15:54.377 12:08:00 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:15:54.378 12:08:00 -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:15:54.378 
12:08:00 -- bdevperf/test_config.sh@25 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -C -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:15:54.378 [2024-07-25 12:08:01.029434] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:15:54.378 [2024-07-25 12:08:01.029482] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1276362 ] 00:15:54.378 [2024-07-25 12:08:01.122764] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:54.378 [2024-07-25 12:08:01.223232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:54.378 cpumask for 'job0' is too big 00:15:54.378 cpumask for 'job1' is too big 00:15:54.378 cpumask for 'job2' is too big 00:15:54.378 cpumask for 'job3' is too big 00:15:56.308 12:08:03 -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:15:56.308 Running I/O for 2 seconds... 
00:15:56.308 00:15:56.308 Latency(us) 00:15:56.308 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:56.308 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:15:56.308 Malloc0 : 2.01 39391.15 38.47 0.00 0.00 6496.22 1218.11 10086.85 00:15:56.308 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:15:56.308 Malloc0 : 2.01 39403.00 38.48 0.00 0.00 6484.63 1118.39 8947.09 00:15:56.308 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:15:56.308 Malloc0 : 2.02 39380.78 38.46 0.00 0.00 6479.43 1168.25 7921.31 00:15:56.308 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:15:56.308 Malloc0 : 2.02 39358.64 38.44 0.00 0.00 6473.79 1139.76 7750.34 00:15:56.308 =================================================================================================================== 00:15:56.308 Total : 157533.56 153.84 0.00 0.00 6483.51 1118.39 10086.85' 00:15:56.308 12:08:03 -- bdevperf/test_config.sh@27 -- # cleanup 00:15:56.308 12:08:03 -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:15:56.566 12:08:03 -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:15:56.566 12:08:03 -- bdevperf/common.sh@8 -- # local job_section=job0 00:15:56.566 12:08:03 -- bdevperf/common.sh@9 -- # local rw=write 00:15:56.566 12:08:03 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:15:56.566 12:08:03 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:15:56.566 12:08:03 -- bdevperf/common.sh@18 -- # job='[job0]' 00:15:56.566 12:08:03 -- bdevperf/common.sh@19 -- # echo 00:15:56.566 00:15:56.566 12:08:03 -- bdevperf/common.sh@20 -- # cat 00:15:56.566 12:08:03 -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:15:56.566 12:08:03 -- bdevperf/common.sh@8 -- # local job_section=job1 00:15:56.566 12:08:03 -- bdevperf/common.sh@9 -- # local rw=write 00:15:56.566 12:08:03 
-- bdevperf/common.sh@10 -- # local filename=Malloc0 00:15:56.566 12:08:03 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:15:56.566 12:08:03 -- bdevperf/common.sh@18 -- # job='[job1]' 00:15:56.566 12:08:03 -- bdevperf/common.sh@19 -- # echo 00:15:56.566 00:15:56.566 12:08:03 -- bdevperf/common.sh@20 -- # cat 00:15:56.566 12:08:03 -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:15:56.566 12:08:03 -- bdevperf/common.sh@8 -- # local job_section=job2 00:15:56.566 12:08:03 -- bdevperf/common.sh@9 -- # local rw=write 00:15:56.566 12:08:03 -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:15:56.566 12:08:03 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:15:56.566 12:08:03 -- bdevperf/common.sh@18 -- # job='[job2]' 00:15:56.566 12:08:03 -- bdevperf/common.sh@19 -- # echo 00:15:56.566 00:15:56.566 12:08:03 -- bdevperf/common.sh@20 -- # cat 00:15:56.566 12:08:03 -- bdevperf/test_config.sh@32 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:15:59.094 12:08:06 -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-25 12:08:03.676716] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:15:59.094 [2024-07-25 12:08:03.676779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1276725 ] 00:15:59.094 Using job config with 3 jobs 00:15:59.094 [2024-07-25 12:08:03.768860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.094 [2024-07-25 12:08:03.862939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.094 cpumask for '\''job0'\'' is too big 00:15:59.094 cpumask for '\''job1'\'' is too big 00:15:59.094 cpumask for '\''job2'\'' is too big 00:15:59.094 Running I/O for 2 seconds... 00:15:59.094 00:15:59.094 Latency(us) 00:15:59.094 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.094 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:15:59.094 Malloc0 : 2.01 53008.53 51.77 0.00 0.00 4826.36 1168.25 7123.48 00:15:59.094 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:15:59.094 Malloc0 : 2.01 52976.68 51.74 0.00 0.00 4822.42 1104.14 5955.23 00:15:59.094 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:15:59.094 Malloc0 : 2.01 52945.07 51.70 0.00 0.00 4818.74 1132.63 5698.78 00:15:59.094 =================================================================================================================== 00:15:59.094 Total : 158930.29 155.21 0.00 0.00 4822.51 1104.14 7123.48' 00:15:59.094 12:08:06 -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-25 12:08:03.676716] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:15:59.094 [2024-07-25 12:08:03.676779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1276725 ] 00:15:59.094 Using job config with 3 jobs 00:15:59.094 [2024-07-25 12:08:03.768860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.094 [2024-07-25 12:08:03.862939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.094 cpumask for '\''job0'\'' is too big 00:15:59.094 cpumask for '\''job1'\'' is too big 00:15:59.094 cpumask for '\''job2'\'' is too big 00:15:59.094 Running I/O for 2 seconds... 00:15:59.094 00:15:59.095 Latency(us) 00:15:59.095 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.095 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:15:59.095 Malloc0 : 2.01 53008.53 51.77 0.00 0.00 4826.36 1168.25 7123.48 00:15:59.095 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:15:59.095 Malloc0 : 2.01 52976.68 51.74 0.00 0.00 4822.42 1104.14 5955.23 00:15:59.095 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:15:59.095 Malloc0 : 2.01 52945.07 51.70 0.00 0.00 4818.74 1132.63 5698.78 00:15:59.095 =================================================================================================================== 00:15:59.095 Total : 158930.29 155.21 0.00 0.00 4822.51 1104.14 7123.48' 00:15:59.095 12:08:06 -- bdevperf/common.sh@32 -- # echo '[2024-07-25 12:08:03.676716] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:15:59.095 [2024-07-25 12:08:03.676779] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1276725 ] 00:15:59.095 Using job config with 3 jobs 00:15:59.095 [2024-07-25 12:08:03.768860] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:59.095 [2024-07-25 12:08:03.862939] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:59.095 cpumask for '\''job0'\'' is too big 00:15:59.095 cpumask for '\''job1'\'' is too big 00:15:59.095 cpumask for '\''job2'\'' is too big 00:15:59.095 Running I/O for 2 seconds... 00:15:59.095 00:15:59.095 Latency(us) 00:15:59.095 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:59.095 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:15:59.095 Malloc0 : 2.01 53008.53 51.77 0.00 0.00 4826.36 1168.25 7123.48 00:15:59.095 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:15:59.095 Malloc0 : 2.01 52976.68 51.74 0.00 0.00 4822.42 1104.14 5955.23 00:15:59.095 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:15:59.095 Malloc0 : 2.01 52945.07 51.70 0.00 0.00 4818.74 1132.63 5698.78 00:15:59.095 =================================================================================================================== 00:15:59.095 Total : 158930.29 155.21 0.00 0.00 4822.51 1104.14 7123.48' 00:15:59.095 12:08:06 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:15:59.095 12:08:06 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:15:59.095 12:08:06 -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:15:59.095 12:08:06 -- bdevperf/test_config.sh@35 -- # cleanup 00:15:59.095 12:08:06 -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:15:59.095 12:08:06 -- 
bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:15:59.095 12:08:06 -- bdevperf/common.sh@8 -- # local job_section=global 00:15:59.095 12:08:06 -- bdevperf/common.sh@9 -- # local rw=rw 00:15:59.095 12:08:06 -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:15:59.095 12:08:06 -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:15:59.095 12:08:06 -- bdevperf/common.sh@13 -- # cat 00:15:59.095 12:08:06 -- bdevperf/common.sh@18 -- # job='[global]' 00:15:59.095 12:08:06 -- bdevperf/common.sh@19 -- # echo 00:15:59.095 00:15:59.095 12:08:06 -- bdevperf/common.sh@20 -- # cat 00:15:59.095 12:08:06 -- bdevperf/test_config.sh@38 -- # create_job job0 00:15:59.095 12:08:06 -- bdevperf/common.sh@8 -- # local job_section=job0 00:15:59.095 12:08:06 -- bdevperf/common.sh@9 -- # local rw= 00:15:59.095 12:08:06 -- bdevperf/common.sh@10 -- # local filename= 00:15:59.095 12:08:06 -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:15:59.095 12:08:06 -- bdevperf/common.sh@18 -- # job='[job0]' 00:15:59.095 12:08:06 -- bdevperf/common.sh@19 -- # echo 00:15:59.095 00:15:59.095 12:08:06 -- bdevperf/common.sh@20 -- # cat 00:15:59.095 12:08:06 -- bdevperf/test_config.sh@39 -- # create_job job1 00:15:59.095 12:08:06 -- bdevperf/common.sh@8 -- # local job_section=job1 00:15:59.095 12:08:06 -- bdevperf/common.sh@9 -- # local rw= 00:15:59.095 12:08:06 -- bdevperf/common.sh@10 -- # local filename= 00:15:59.095 12:08:06 -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:15:59.095 12:08:06 -- bdevperf/common.sh@18 -- # job='[job1]' 00:15:59.095 12:08:06 -- bdevperf/common.sh@19 -- # echo 00:15:59.095 00:15:59.095 12:08:06 -- bdevperf/common.sh@20 -- # cat 00:15:59.095 12:08:06 -- bdevperf/test_config.sh@40 -- # create_job job2 00:15:59.095 12:08:06 -- bdevperf/common.sh@8 -- # local job_section=job2 00:15:59.095 12:08:06 -- bdevperf/common.sh@9 -- # local rw= 00:15:59.095 12:08:06 -- bdevperf/common.sh@10 -- # local filename= 
00:15:59.095 12:08:06 -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:15:59.095 12:08:06 -- bdevperf/common.sh@18 -- # job='[job2]' 00:15:59.095 12:08:06 -- bdevperf/common.sh@19 -- # echo 00:15:59.095 00:15:59.095 12:08:06 -- bdevperf/common.sh@20 -- # cat 00:15:59.095 12:08:06 -- bdevperf/test_config.sh@41 -- # create_job job3 00:15:59.095 12:08:06 -- bdevperf/common.sh@8 -- # local job_section=job3 00:15:59.095 12:08:06 -- bdevperf/common.sh@9 -- # local rw= 00:15:59.095 12:08:06 -- bdevperf/common.sh@10 -- # local filename= 00:15:59.095 12:08:06 -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:15:59.095 12:08:06 -- bdevperf/common.sh@18 -- # job='[job3]' 00:15:59.095 12:08:06 -- bdevperf/common.sh@19 -- # echo 00:15:59.095 00:15:59.095 12:08:06 -- bdevperf/common.sh@20 -- # cat 00:15:59.095 12:08:06 -- bdevperf/test_config.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -t 2 --json /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/conf.json -j /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:16:02.380 12:08:09 -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-25 12:08:06.368370] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:16:02.380 [2024-07-25 12:08:06.368428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1277094 ] 00:16:02.380 Using job config with 4 jobs 00:16:02.380 [2024-07-25 12:08:06.465238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.380 [2024-07-25 12:08:06.561959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.380 cpumask for '\''job0'\'' is too big 00:16:02.380 cpumask for '\''job1'\'' is too big 00:16:02.380 cpumask for '\''job2'\'' is too big 00:16:02.380 cpumask for '\''job3'\'' is too big 00:16:02.380 Running I/O for 2 seconds... 00:16:02.380 00:16:02.380 Latency(us) 00:16:02.380 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.380 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:02.380 Malloc0 : 2.03 19467.08 19.01 0.00 0.00 13142.69 2507.46 20401.64 00:16:02.380 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:02.380 Malloc1 : 2.03 19455.33 19.00 0.00 0.00 13141.47 2906.38 20287.67 00:16:02.380 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:02.380 Malloc0 : 2.03 19443.97 18.99 0.00 0.00 13120.18 2293.76 18122.13 00:16:02.380 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:02.380 Malloc1 : 2.03 19432.76 18.98 0.00 0.00 13121.34 2763.91 18122.13 00:16:02.380 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:02.380 Malloc0 : 2.03 19421.83 18.97 0.00 0.00 13100.74 2336.50 15728.64 00:16:02.380 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:02.380 Malloc1 : 2.03 19409.85 18.95 0.00 0.00 13100.02 2849.39 15728.64 00:16:02.380 Job: Malloc0 (Core Mask 
0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:02.380 Malloc0 : 2.03 19398.84 18.94 0.00 0.00 13078.93 2293.76 13905.03 00:16:02.380 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:02.380 Malloc1 : 2.03 19387.69 18.93 0.00 0.00 13079.66 2877.89 13905.03 00:16:02.380 =================================================================================================================== 00:16:02.380 Total : 155417.36 151.77 0.00 0.00 13110.63 2293.76 20401.64' 00:16:02.380 12:08:09 -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-25 12:08:06.368370] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:02.380 [2024-07-25 12:08:06.368428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1277094 ] 00:16:02.380 Using job config with 4 jobs 00:16:02.380 [2024-07-25 12:08:06.465238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.380 [2024-07-25 12:08:06.561959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.380 cpumask for '\''job0'\'' is too big 00:16:02.380 cpumask for '\''job1'\'' is too big 00:16:02.380 cpumask for '\''job2'\'' is too big 00:16:02.380 cpumask for '\''job3'\'' is too big 00:16:02.380 Running I/O for 2 seconds... 
00:16:02.380 00:16:02.380 Latency(us) 00:16:02.380 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.380 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:02.380 Malloc0 : 2.03 19467.08 19.01 0.00 0.00 13142.69 2507.46 20401.64 00:16:02.380 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:02.380 Malloc1 : 2.03 19455.33 19.00 0.00 0.00 13141.47 2906.38 20287.67 00:16:02.380 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:02.380 Malloc0 : 2.03 19443.97 18.99 0.00 0.00 13120.18 2293.76 18122.13 00:16:02.380 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:02.380 Malloc1 : 2.03 19432.76 18.98 0.00 0.00 13121.34 2763.91 18122.13 00:16:02.380 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:02.380 Malloc0 : 2.03 19421.83 18.97 0.00 0.00 13100.74 2336.50 15728.64 00:16:02.380 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:02.380 Malloc1 : 2.03 19409.85 18.95 0.00 0.00 13100.02 2849.39 15728.64 00:16:02.380 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:02.380 Malloc0 : 2.03 19398.84 18.94 0.00 0.00 13078.93 2293.76 13905.03 00:16:02.380 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:02.380 Malloc1 : 2.03 19387.69 18.93 0.00 0.00 13079.66 2877.89 13905.03 00:16:02.380 =================================================================================================================== 00:16:02.380 Total : 155417.36 151.77 0.00 0.00 13110.63 2293.76 20401.64' 00:16:02.380 12:08:09 -- bdevperf/common.sh@32 -- # echo '[2024-07-25 12:08:06.368370] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:16:02.380 [2024-07-25 12:08:06.368428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1277094 ] 00:16:02.380 Using job config with 4 jobs 00:16:02.380 [2024-07-25 12:08:06.465238] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:02.380 [2024-07-25 12:08:06.561959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.380 cpumask for '\''job0'\'' is too big 00:16:02.380 cpumask for '\''job1'\'' is too big 00:16:02.380 cpumask for '\''job2'\'' is too big 00:16:02.380 cpumask for '\''job3'\'' is too big 00:16:02.380 Running I/O for 2 seconds... 00:16:02.380 00:16:02.380 Latency(us) 00:16:02.380 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:02.380 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:02.380 Malloc0 : 2.03 19467.08 19.01 0.00 0.00 13142.69 2507.46 20401.64 00:16:02.380 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:02.380 Malloc1 : 2.03 19455.33 19.00 0.00 0.00 13141.47 2906.38 20287.67 00:16:02.380 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:02.380 Malloc0 : 2.03 19443.97 18.99 0.00 0.00 13120.18 2293.76 18122.13 00:16:02.380 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:02.380 Malloc1 : 2.03 19432.76 18.98 0.00 0.00 13121.34 2763.91 18122.13 00:16:02.380 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:02.380 Malloc0 : 2.03 19421.83 18.97 0.00 0.00 13100.74 2336.50 15728.64 00:16:02.380 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:02.380 Malloc1 : 2.03 19409.85 18.95 0.00 0.00 13100.02 2849.39 15728.64 00:16:02.380 Job: Malloc0 (Core Mask 
0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:02.380 Malloc0 : 2.03 19398.84 18.94 0.00 0.00 13078.93 2293.76 13905.03 00:16:02.380 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:16:02.380 Malloc1 : 2.03 19387.69 18.93 0.00 0.00 13079.66 2877.89 13905.03 00:16:02.380 =================================================================================================================== 00:16:02.380 Total : 155417.36 151.77 0.00 0.00 13110.63 2293.76 20401.64' 00:16:02.380 12:08:09 -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:16:02.380 12:08:09 -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:16:02.380 12:08:09 -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:16:02.380 12:08:09 -- bdevperf/test_config.sh@44 -- # cleanup 00:16:02.380 12:08:09 -- bdevperf/common.sh@36 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevperf/test.conf 00:16:02.381 12:08:09 -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:16:02.381 00:16:02.381 real 0m10.813s 00:16:02.381 user 0m9.681s 00:16:02.381 sys 0m0.993s 00:16:02.381 12:08:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:02.381 12:08:09 -- common/autotest_common.sh@10 -- # set +x 00:16:02.381 ************************************ 00:16:02.381 END TEST bdevperf_config 00:16:02.381 ************************************ 00:16:02.381 12:08:09 -- spdk/autotest.sh@198 -- # uname -s 00:16:02.381 12:08:09 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:16:02.381 12:08:09 -- spdk/autotest.sh@199 -- # run_test reactor_set_interrupt /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:16:02.381 12:08:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:02.381 12:08:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:02.381 12:08:09 -- common/autotest_common.sh@10 -- # set +x 00:16:02.381 ************************************ 
00:16:02.381 START TEST reactor_set_interrupt 00:16:02.381 ************************************ 00:16:02.381 12:08:09 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:16:02.381 * Looking for test storage... 00:16:02.381 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:16:02.381 12:08:09 -- interrupt/reactor_set_interrupt.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:16:02.381 12:08:09 -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reactor_set_interrupt.sh 00:16:02.381 12:08:09 -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:16:02.381 12:08:09 -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:16:02.381 12:08:09 -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 
00:16:02.381 12:08:09 -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:16:02.381 12:08:09 -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:16:02.381 12:08:09 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:16:02.381 12:08:09 -- common/autotest_common.sh@34 -- # set -e 00:16:02.381 12:08:09 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:16:02.381 12:08:09 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:16:02.381 12:08:09 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:16:02.381 12:08:09 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:16:02.381 12:08:09 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:16:02.381 12:08:09 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:16:02.381 12:08:09 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:16:02.381 12:08:09 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:16:02.381 12:08:09 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:16:02.381 12:08:09 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:16:02.381 12:08:09 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:16:02.381 12:08:09 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:16:02.381 12:08:09 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:16:02.381 12:08:09 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:16:02.381 12:08:09 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:16:02.381 12:08:09 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:16:02.381 12:08:09 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:16:02.381 12:08:09 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:16:02.381 12:08:09 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:16:02.381 12:08:09 -- common/build_config.sh@16 -- 
# CONFIG_VFIO_USER_DIR= 00:16:02.381 12:08:09 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:16:02.381 12:08:09 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:16:02.381 12:08:09 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:16:02.381 12:08:09 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:16:02.381 12:08:09 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:16:02.381 12:08:09 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:16:02.381 12:08:09 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=y 00:16:02.381 12:08:09 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:16:02.381 12:08:09 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:16:02.381 12:08:09 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:16:02.381 12:08:09 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:16:02.381 12:08:09 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:16:02.381 12:08:09 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:16:02.381 12:08:09 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:16:02.381 12:08:09 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:16:02.381 12:08:09 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:16:02.381 12:08:09 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:16:02.381 12:08:09 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:16:02.381 12:08:09 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:16:02.381 12:08:09 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:16:02.381 12:08:09 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:16:02.381 12:08:09 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:16:02.381 12:08:09 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:16:02.381 12:08:09 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:16:02.381 12:08:09 -- common/build_config.sh@41 -- # 
CONFIG_DPDK_INC_DIR= 00:16:02.381 12:08:09 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:16:02.381 12:08:09 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:16:02.381 12:08:09 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:16:02.381 12:08:09 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:16:02.381 12:08:09 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:16:02.381 12:08:09 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:16:02.381 12:08:09 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:16:02.381 12:08:09 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:16:02.381 12:08:09 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:16:02.381 12:08:09 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:16:02.381 12:08:09 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:16:02.381 12:08:09 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:16:02.381 12:08:09 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:16:02.381 12:08:09 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:16:02.381 12:08:09 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:16:02.381 12:08:09 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:16:02.381 12:08:09 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:16:02.381 12:08:09 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:16:02.381 12:08:09 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:16:02.381 12:08:09 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:16:02.381 12:08:09 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:16:02.381 12:08:09 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:16:02.381 12:08:09 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:16:02.381 12:08:09 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:16:02.381 12:08:09 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:16:02.381 12:08:09 -- 
common/build_config.sh@67 -- # CONFIG_FC=n 00:16:02.381 12:08:09 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:16:02.381 12:08:09 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:16:02.381 12:08:09 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:16:02.381 12:08:09 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:16:02.381 12:08:09 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:16:02.381 12:08:09 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=y 00:16:02.381 12:08:09 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:16:02.381 12:08:09 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=y 00:16:02.381 12:08:09 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:16:02.381 12:08:09 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=y 00:16:02.381 12:08:09 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:16:02.381 12:08:09 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:16:02.381 12:08:09 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:16:02.381 12:08:09 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:16:02.381 12:08:09 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:16:02.381 12:08:09 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:16:02.381 12:08:09 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:16:02.381 12:08:09 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:16:02.381 12:08:09 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:16:02.381 12:08:09 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:16:02.381 12:08:09 -- 
common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:16:02.381 12:08:09 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:16:02.381 12:08:09 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:16:02.382 12:08:09 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:16:02.382 12:08:09 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:16:02.382 12:08:09 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:16:02.382 12:08:09 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:16:02.382 12:08:09 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:16:02.382 #define SPDK_CONFIG_H 00:16:02.382 #define SPDK_CONFIG_APPS 1 00:16:02.382 #define SPDK_CONFIG_ARCH native 00:16:02.382 #undef SPDK_CONFIG_ASAN 00:16:02.382 #undef SPDK_CONFIG_AVAHI 00:16:02.382 #undef SPDK_CONFIG_CET 00:16:02.382 #define SPDK_CONFIG_COVERAGE 1 00:16:02.382 #define SPDK_CONFIG_CROSS_PREFIX 00:16:02.382 #define SPDK_CONFIG_CRYPTO 1 00:16:02.382 #define SPDK_CONFIG_CRYPTO_MLX5 1 00:16:02.382 #undef SPDK_CONFIG_CUSTOMOCF 00:16:02.382 #undef SPDK_CONFIG_DAOS 00:16:02.382 #define SPDK_CONFIG_DAOS_DIR 00:16:02.382 #define SPDK_CONFIG_DEBUG 1 00:16:02.382 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:16:02.382 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:16:02.382 #define SPDK_CONFIG_DPDK_INC_DIR 00:16:02.382 #define SPDK_CONFIG_DPDK_LIB_DIR 00:16:02.382 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:16:02.382 #define SPDK_CONFIG_ENV /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:16:02.382 #define SPDK_CONFIG_EXAMPLES 1 00:16:02.382 #undef SPDK_CONFIG_FC 00:16:02.382 #define SPDK_CONFIG_FC_PATH 00:16:02.382 #define SPDK_CONFIG_FIO_PLUGIN 1 00:16:02.382 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:16:02.382 #undef SPDK_CONFIG_FUSE 00:16:02.382 #undef 
SPDK_CONFIG_FUZZER 00:16:02.382 #define SPDK_CONFIG_FUZZER_LIB 00:16:02.382 #undef SPDK_CONFIG_GOLANG 00:16:02.382 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:16:02.382 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:16:02.382 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:16:02.382 #undef SPDK_CONFIG_HAVE_LIBBSD 00:16:02.382 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:16:02.382 #define SPDK_CONFIG_IDXD 1 00:16:02.382 #define SPDK_CONFIG_IDXD_KERNEL 1 00:16:02.382 #define SPDK_CONFIG_IPSEC_MB 1 00:16:02.382 #define SPDK_CONFIG_IPSEC_MB_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:16:02.382 #define SPDK_CONFIG_ISAL 1 00:16:02.382 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:16:02.382 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:16:02.382 #define SPDK_CONFIG_LIBDIR 00:16:02.382 #undef SPDK_CONFIG_LTO 00:16:02.382 #define SPDK_CONFIG_MAX_LCORES 00:16:02.382 #define SPDK_CONFIG_NVME_CUSE 1 00:16:02.382 #undef SPDK_CONFIG_OCF 00:16:02.382 #define SPDK_CONFIG_OCF_PATH 00:16:02.382 #define SPDK_CONFIG_OPENSSL_PATH 00:16:02.382 #undef SPDK_CONFIG_PGO_CAPTURE 00:16:02.382 #undef SPDK_CONFIG_PGO_USE 00:16:02.382 #define SPDK_CONFIG_PREFIX /usr/local 00:16:02.382 #undef SPDK_CONFIG_RAID5F 00:16:02.382 #undef SPDK_CONFIG_RBD 00:16:02.382 #define SPDK_CONFIG_RDMA 1 00:16:02.382 #define SPDK_CONFIG_RDMA_PROV verbs 00:16:02.382 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:16:02.382 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:16:02.382 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:16:02.382 #define SPDK_CONFIG_SHARED 1 00:16:02.382 #undef SPDK_CONFIG_SMA 00:16:02.382 #define SPDK_CONFIG_TESTS 1 00:16:02.382 #undef SPDK_CONFIG_TSAN 00:16:02.382 #define SPDK_CONFIG_UBLK 1 00:16:02.382 #define SPDK_CONFIG_UBSAN 1 00:16:02.382 #undef SPDK_CONFIG_UNIT_TESTS 00:16:02.382 #undef SPDK_CONFIG_URING 00:16:02.382 #define SPDK_CONFIG_URING_PATH 00:16:02.382 #undef SPDK_CONFIG_URING_ZNS 00:16:02.382 #undef SPDK_CONFIG_USDT 00:16:02.382 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:16:02.382 #define 
SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:16:02.382 #undef SPDK_CONFIG_VFIO_USER 00:16:02.382 #define SPDK_CONFIG_VFIO_USER_DIR 00:16:02.382 #define SPDK_CONFIG_VHOST 1 00:16:02.382 #define SPDK_CONFIG_VIRTIO 1 00:16:02.382 #undef SPDK_CONFIG_VTUNE 00:16:02.382 #define SPDK_CONFIG_VTUNE_DIR 00:16:02.382 #define SPDK_CONFIG_WERROR 1 00:16:02.382 #define SPDK_CONFIG_WPDK_DIR 00:16:02.382 #undef SPDK_CONFIG_XNVME 00:16:02.382 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:16:02.382 12:08:09 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:16:02.382 12:08:09 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:16:02.382 12:08:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:02.382 12:08:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:02.382 12:08:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:02.382 12:08:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.382 12:08:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.382 12:08:09 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.382 12:08:09 -- paths/export.sh@5 -- # export PATH 00:16:02.382 12:08:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.382 12:08:09 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:16:02.382 12:08:09 -- pm/common@6 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:16:02.382 12:08:09 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:16:02.382 12:08:09 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:16:02.382 12:08:09 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:16:02.382 12:08:09 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:16:02.382 12:08:09 -- pm/common@16 -- # TEST_TAG=N/A 00:16:02.382 12:08:09 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:16:02.382 12:08:09 -- common/autotest_common.sh@52 -- # : 1 00:16:02.382 12:08:09 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:16:02.382 12:08:09 -- 
common/autotest_common.sh@56 -- # : 0 00:16:02.382 12:08:09 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:16:02.382 12:08:09 -- common/autotest_common.sh@58 -- # : 0 00:16:02.382 12:08:09 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:16:02.382 12:08:09 -- common/autotest_common.sh@60 -- # : 1 00:16:02.382 12:08:09 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:16:02.382 12:08:09 -- common/autotest_common.sh@62 -- # : 0 00:16:02.382 12:08:09 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:16:02.382 12:08:09 -- common/autotest_common.sh@64 -- # : 00:16:02.382 12:08:09 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:16:02.382 12:08:09 -- common/autotest_common.sh@66 -- # : 0 00:16:02.382 12:08:09 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:16:02.382 12:08:09 -- common/autotest_common.sh@68 -- # : 1 00:16:02.382 12:08:09 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:16:02.382 12:08:09 -- common/autotest_common.sh@70 -- # : 0 00:16:02.382 12:08:09 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:16:02.382 12:08:09 -- common/autotest_common.sh@72 -- # : 0 00:16:02.382 12:08:09 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:16:02.382 12:08:09 -- common/autotest_common.sh@74 -- # : 0 00:16:02.382 12:08:09 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:16:02.382 12:08:09 -- common/autotest_common.sh@76 -- # : 0 00:16:02.382 12:08:09 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:16:02.382 12:08:09 -- common/autotest_common.sh@78 -- # : 0 00:16:02.382 12:08:09 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:16:02.382 12:08:09 -- common/autotest_common.sh@80 -- # : 0 00:16:02.382 12:08:09 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:16:02.382 12:08:09 -- common/autotest_common.sh@82 -- # : 0 00:16:02.382 12:08:09 -- 
common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:16:02.382 12:08:09 -- common/autotest_common.sh@84 -- # : 0 00:16:02.382 12:08:09 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:16:02.383 12:08:09 -- common/autotest_common.sh@86 -- # : 0 00:16:02.383 12:08:09 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:16:02.383 12:08:09 -- common/autotest_common.sh@88 -- # : 0 00:16:02.383 12:08:09 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:16:02.383 12:08:09 -- common/autotest_common.sh@90 -- # : 0 00:16:02.383 12:08:09 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:16:02.383 12:08:09 -- common/autotest_common.sh@92 -- # : 0 00:16:02.383 12:08:09 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:16:02.383 12:08:09 -- common/autotest_common.sh@94 -- # : 0 00:16:02.383 12:08:09 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:16:02.383 12:08:09 -- common/autotest_common.sh@96 -- # : rdma 00:16:02.383 12:08:09 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:16:02.383 12:08:09 -- common/autotest_common.sh@98 -- # : 0 00:16:02.383 12:08:09 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:16:02.383 12:08:09 -- common/autotest_common.sh@100 -- # : 0 00:16:02.383 12:08:09 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:16:02.383 12:08:09 -- common/autotest_common.sh@102 -- # : 1 00:16:02.383 12:08:09 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:16:02.383 12:08:09 -- common/autotest_common.sh@104 -- # : 0 00:16:02.383 12:08:09 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:16:02.383 12:08:09 -- common/autotest_common.sh@106 -- # : 0 00:16:02.383 12:08:09 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:16:02.383 12:08:09 -- common/autotest_common.sh@108 -- # : 0 00:16:02.383 12:08:09 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 
00:16:02.383 12:08:09 -- common/autotest_common.sh@110 -- # : 0 00:16:02.383 12:08:09 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:16:02.383 12:08:09 -- common/autotest_common.sh@112 -- # : 1 00:16:02.383 12:08:09 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:16:02.383 12:08:09 -- common/autotest_common.sh@114 -- # : 0 00:16:02.383 12:08:09 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:16:02.383 12:08:09 -- common/autotest_common.sh@116 -- # : 1 00:16:02.383 12:08:09 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:16:02.383 12:08:09 -- common/autotest_common.sh@118 -- # : 00:16:02.383 12:08:09 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:16:02.383 12:08:09 -- common/autotest_common.sh@120 -- # : 0 00:16:02.383 12:08:09 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:16:02.383 12:08:09 -- common/autotest_common.sh@122 -- # : 1 00:16:02.383 12:08:09 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:16:02.383 12:08:09 -- common/autotest_common.sh@124 -- # : 0 00:16:02.383 12:08:09 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:16:02.383 12:08:09 -- common/autotest_common.sh@126 -- # : 0 00:16:02.383 12:08:09 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:16:02.383 12:08:09 -- common/autotest_common.sh@128 -- # : 0 00:16:02.383 12:08:09 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:16:02.383 12:08:09 -- common/autotest_common.sh@130 -- # : 0 00:16:02.383 12:08:09 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:16:02.383 12:08:09 -- common/autotest_common.sh@132 -- # : 00:16:02.383 12:08:09 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:16:02.383 12:08:09 -- common/autotest_common.sh@134 -- # : true 00:16:02.383 12:08:09 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:16:02.383 12:08:09 -- common/autotest_common.sh@136 -- # : 0 
00:16:02.383 12:08:09 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:16:02.383 12:08:09 -- common/autotest_common.sh@138 -- # : 0 00:16:02.383 12:08:09 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:16:02.383 12:08:09 -- common/autotest_common.sh@140 -- # : 0 00:16:02.383 12:08:09 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:16:02.383 12:08:09 -- common/autotest_common.sh@142 -- # : 0 00:16:02.383 12:08:09 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:16:02.383 12:08:09 -- common/autotest_common.sh@144 -- # : 0 00:16:02.383 12:08:09 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:16:02.383 12:08:09 -- common/autotest_common.sh@146 -- # : 0 00:16:02.383 12:08:09 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:16:02.383 12:08:09 -- common/autotest_common.sh@148 -- # : 00:16:02.383 12:08:09 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:16:02.383 12:08:09 -- common/autotest_common.sh@150 -- # : 0 00:16:02.383 12:08:09 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:16:02.383 12:08:09 -- common/autotest_common.sh@152 -- # : 0 00:16:02.383 12:08:09 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:16:02.383 12:08:09 -- common/autotest_common.sh@154 -- # : 0 00:16:02.383 12:08:09 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:16:02.383 12:08:09 -- common/autotest_common.sh@156 -- # : 0 00:16:02.383 12:08:09 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:16:02.383 12:08:09 -- common/autotest_common.sh@158 -- # : 0 00:16:02.383 12:08:09 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:16:02.383 12:08:09 -- common/autotest_common.sh@160 -- # : 0 00:16:02.383 12:08:09 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:16:02.383 12:08:09 -- common/autotest_common.sh@163 -- # : 00:16:02.383 12:08:09 -- common/autotest_common.sh@164 -- # 
export SPDK_TEST_FUZZER_TARGET 00:16:02.383 12:08:09 -- common/autotest_common.sh@165 -- # : 0 00:16:02.383 12:08:09 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:16:02.383 12:08:09 -- common/autotest_common.sh@167 -- # : 0 00:16:02.383 12:08:09 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:16:02.383 12:08:09 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:16:02.383 12:08:09 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:16:02.383 12:08:09 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:16:02.383 12:08:09 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:16:02.383 12:08:09 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:02.383 12:08:09 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:02.383 12:08:09 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:02.383 12:08:09 -- common/autotest_common.sh@174 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:02.383 12:08:09 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:16:02.383 12:08:09 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:16:02.383 12:08:09 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:16:02.383 12:08:09 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:16:02.383 12:08:09 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:16:02.383 12:08:09 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:16:02.383 12:08:09 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:02.383 12:08:09 -- common/autotest_common.sh@189 -- # 
ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:02.383 12:08:09 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:02.383 12:08:09 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:02.383 12:08:09 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:16:02.383 12:08:09 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:16:02.383 12:08:09 -- common/autotest_common.sh@196 -- # cat 00:16:02.383 12:08:09 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:16:02.383 12:08:09 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:02.383 12:08:09 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:02.383 12:08:09 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:02.383 12:08:09 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:02.383 12:08:09 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:16:02.383 12:08:09 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:16:02.384 12:08:09 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:16:02.384 12:08:09 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:16:02.384 12:08:09 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:16:02.384 12:08:09 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:16:02.384 12:08:09 -- 
common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:02.384 12:08:09 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:02.384 12:08:09 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:02.384 12:08:09 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:02.384 12:08:09 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:16:02.384 12:08:09 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:16:02.384 12:08:09 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:02.384 12:08:09 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:02.384 12:08:09 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:16:02.384 12:08:09 -- common/autotest_common.sh@249 -- # export valgrind= 00:16:02.384 12:08:09 -- common/autotest_common.sh@249 -- # valgrind= 00:16:02.384 12:08:09 -- common/autotest_common.sh@255 -- # uname -s 00:16:02.384 12:08:09 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:16:02.384 12:08:09 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:16:02.384 12:08:09 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:16:02.384 12:08:09 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:16:02.384 12:08:09 -- common/autotest_common.sh@258 -- # [[ 1 -eq 1 ]] 00:16:02.384 12:08:09 -- common/autotest_common.sh@262 -- # export HUGE_EVEN_ALLOC=yes 00:16:02.384 12:08:09 -- common/autotest_common.sh@262 -- # HUGE_EVEN_ALLOC=yes 00:16:02.384 12:08:09 -- common/autotest_common.sh@265 -- # MAKE=make 00:16:02.384 12:08:09 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j72 00:16:02.384 12:08:09 -- 
common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:16:02.384 12:08:09 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:16:02.384 12:08:09 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:16:02.384 12:08:09 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:16:02.384 12:08:09 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:16:02.384 12:08:09 -- common/autotest_common.sh@309 -- # [[ -z 1277504 ]] 00:16:02.384 12:08:09 -- common/autotest_common.sh@309 -- # kill -0 1277504 00:16:02.384 12:08:09 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:16:02.384 12:08:09 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:16:02.384 12:08:09 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:16:02.384 12:08:09 -- common/autotest_common.sh@322 -- # local mount target_dir 00:16:02.384 12:08:09 -- common/autotest_common.sh@324 -- # local -A mounts fss sizes avails uses 00:16:02.384 12:08:09 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:16:02.384 12:08:09 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:16:02.384 12:08:09 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:16:02.384 12:08:09 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.b0Epy9 00:16:02.384 12:08:09 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:16:02.384 12:08:09 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:16:02.384 12:08:09 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:16:02.384 12:08:09 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.b0Epy9/tests/interrupt /tmp/spdk.b0Epy9 00:16:02.384 12:08:09 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:16:02.384 12:08:09 -- common/autotest_common.sh@351 -- # read 
-r source fs size use avail _ mount 00:16:02.384 12:08:09 -- common/autotest_common.sh@318 -- # df -T 00:16:02.384 12:08:09 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:16:02.384 12:08:09 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:16:02.384 12:08:09 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:16:02.384 12:08:09 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:16:02.384 12:08:09 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:16:02.384 12:08:09 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:16:02.384 12:08:09 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:16:02.384 12:08:09 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:16:02.384 12:08:09 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:16:02.384 12:08:09 -- common/autotest_common.sh@353 -- # avails["$mount"]=955527168 00:16:02.384 12:08:09 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:16:02.384 12:08:09 -- common/autotest_common.sh@354 -- # uses["$mount"]=4328902656 00:16:02.384 12:08:09 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:16:02.384 12:08:09 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:16:02.384 12:08:09 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:16:02.384 12:08:09 -- common/autotest_common.sh@353 -- # avails["$mount"]=83648847872 00:16:02.384 12:08:09 -- common/autotest_common.sh@353 -- # sizes["$mount"]=94508597248 00:16:02.384 12:08:09 -- common/autotest_common.sh@354 -- # uses["$mount"]=10859749376 00:16:02.384 12:08:09 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:16:02.384 12:08:09 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:16:02.384 12:08:09 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:16:02.384 12:08:09 -- common/autotest_common.sh@353 -- # 
avails["$mount"]=47251705856 00:16:02.384 12:08:09 -- common/autotest_common.sh@353 -- # sizes["$mount"]=47254298624 00:16:02.384 12:08:09 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:16:02.384 12:08:09 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:16:02.384 12:08:09 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:16:02.384 12:08:09 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:16:02.384 12:08:09 -- common/autotest_common.sh@353 -- # avails["$mount"]=18892201984 00:16:02.384 12:08:09 -- common/autotest_common.sh@353 -- # sizes["$mount"]=18901721088 00:16:02.384 12:08:09 -- common/autotest_common.sh@354 -- # uses["$mount"]=9519104 00:16:02.384 12:08:09 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:16:02.384 12:08:09 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:16:02.384 12:08:09 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:16:02.384 12:08:09 -- common/autotest_common.sh@353 -- # avails["$mount"]=47253647360 00:16:02.384 12:08:09 -- common/autotest_common.sh@353 -- # sizes["$mount"]=47254298624 00:16:02.384 12:08:09 -- common/autotest_common.sh@354 -- # uses["$mount"]=651264 00:16:02.384 12:08:09 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:16:02.384 12:08:09 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:16:02.384 12:08:09 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:16:02.384 12:08:09 -- common/autotest_common.sh@353 -- # avails["$mount"]=9450852352 00:16:02.384 12:08:09 -- common/autotest_common.sh@353 -- # sizes["$mount"]=9450856448 00:16:02.384 12:08:09 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:16:02.384 12:08:09 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:16:02.384 12:08:09 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:16:02.384 * Looking for test 
storage... 00:16:02.384 12:08:09 -- common/autotest_common.sh@359 -- # local target_space new_size 00:16:02.384 12:08:09 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:16:02.384 12:08:09 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:16:02.384 12:08:09 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:16:02.384 12:08:09 -- common/autotest_common.sh@363 -- # mount=/ 00:16:02.384 12:08:09 -- common/autotest_common.sh@365 -- # target_space=83648847872 00:16:02.384 12:08:09 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:16:02.384 12:08:09 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:16:02.384 12:08:09 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:16:02.384 12:08:09 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:16:02.384 12:08:09 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:16:02.385 12:08:09 -- common/autotest_common.sh@372 -- # new_size=13074341888 00:16:02.385 12:08:09 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:16:02.385 12:08:09 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:16:02.385 12:08:09 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:16:02.385 12:08:09 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:16:02.385 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:16:02.385 12:08:09 -- common/autotest_common.sh@380 -- # return 0 00:16:02.385 12:08:09 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:16:02.385 12:08:09 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:16:02.385 
12:08:09 -- common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:16:02.385 12:08:09 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:16:02.385 12:08:09 -- common/autotest_common.sh@1672 -- # true 00:16:02.385 12:08:09 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:16:02.385 12:08:09 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:16:02.385 12:08:09 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:16:02.385 12:08:09 -- common/autotest_common.sh@27 -- # exec 00:16:02.385 12:08:09 -- common/autotest_common.sh@29 -- # exec 00:16:02.385 12:08:09 -- common/autotest_common.sh@31 -- # xtrace_restore 00:16:02.385 12:08:09 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:16:02.385 12:08:09 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:16:02.385 12:08:09 -- common/autotest_common.sh@18 -- # set -x 00:16:02.385 12:08:09 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:02.385 12:08:09 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:16:02.385 12:08:09 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:16:02.385 12:08:09 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:16:02.385 12:08:09 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:16:02.385 12:08:09 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:16:02.385 12:08:09 -- interrupt/reactor_set_interrupt.sh@11 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:16:02.385 12:08:09 -- 
interrupt/reactor_set_interrupt.sh@11 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:16:02.385 12:08:09 -- interrupt/reactor_set_interrupt.sh@86 -- # start_intr_tgt 00:16:02.385 12:08:09 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.385 12:08:09 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:16:02.385 12:08:09 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=1277543 00:16:02.385 12:08:09 -- interrupt/interrupt_common.sh@26 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:16:02.385 12:08:09 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:02.385 12:08:09 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 1277543 /var/tmp/spdk.sock 00:16:02.385 12:08:09 -- common/autotest_common.sh@819 -- # '[' -z 1277543 ']' 00:16:02.385 12:08:09 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:02.385 12:08:09 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:02.385 12:08:09 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:02.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:02.386 12:08:09 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:02.386 12:08:09 -- common/autotest_common.sh@10 -- # set +x 00:16:02.386 [2024-07-25 12:08:09.396003] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:16:02.386 [2024-07-25 12:08:09.396062] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1277543 ] 00:16:02.386 [2024-07-25 12:08:09.483366] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:02.386 [2024-07-25 12:08:09.565785] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:02.386 [2024-07-25 12:08:09.565875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:02.386 [2024-07-25 12:08:09.565878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:02.386 [2024-07-25 12:08:09.634127] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:02.954 12:08:10 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:02.954 12:08:10 -- common/autotest_common.sh@852 -- # return 0 00:16:02.954 12:08:10 -- interrupt/reactor_set_interrupt.sh@87 -- # setup_bdev_mem 00:16:02.954 12:08:10 -- interrupt/interrupt_common.sh@90 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:03.212 Malloc0 00:16:03.212 Malloc1 00:16:03.212 Malloc2 00:16:03.212 12:08:10 -- interrupt/reactor_set_interrupt.sh@88 -- # setup_bdev_aio 00:16:03.212 12:08:10 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:16:03.212 12:08:10 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:16:03.212 12:08:10 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:16:03.212 5000+0 records in 00:16:03.212 5000+0 records out 00:16:03.212 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0253855 s, 403 MB/s 00:16:03.212 12:08:10 -- interrupt/interrupt_common.sh@100 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:16:03.470 AIO0 00:16:03.470 12:08:10 -- interrupt/reactor_set_interrupt.sh@90 -- # reactor_set_mode_without_threads 1277543 00:16:03.470 12:08:10 -- interrupt/reactor_set_interrupt.sh@76 -- # reactor_set_intr_mode 1277543 without_thd 00:16:03.470 12:08:10 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=1277543 00:16:03.470 12:08:10 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd=without_thd 00:16:03.470 12:08:10 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:16:03.470 12:08:10 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:16:03.470 12:08:10 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:16:03.470 12:08:10 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:16:03.470 12:08:10 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:16:03.470 12:08:10 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:16:03.470 12:08:10 -- interrupt/interrupt_common.sh@85 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:16:03.470 12:08:10 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:16:03.728 12:08:10 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:16:03.728 12:08:10 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:16:03.728 12:08:10 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:16:03.728 12:08:10 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:16:03.728 12:08:10 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:16:03.728 12:08:10 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:16:03.728 12:08:10 -- interrupt/interrupt_common.sh@82 -- # 
jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:16:03.728 12:08:10 -- interrupt/interrupt_common.sh@85 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:16:03.728 12:08:10 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:16:03.728 12:08:11 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:16:03.728 12:08:11 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:16:03.728 12:08:11 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:16:03.728 spdk_thread ids are 1 on reactor0. 00:16:03.728 12:08:11 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:16:03.728 12:08:11 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1277543 0 00:16:03.728 12:08:11 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 1277543 0 idle 00:16:03.728 12:08:11 -- interrupt/interrupt_common.sh@33 -- # local pid=1277543 00:16:03.728 12:08:11 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:16:03.728 12:08:11 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:16:03.728 12:08:11 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:16:03.728 12:08:11 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:16:03.728 12:08:11 -- interrupt/interrupt_common.sh@41 -- # hash top 00:16:03.728 12:08:11 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:16:03.728 12:08:11 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:16:03.728 12:08:11 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 1277543 -w 256 00:16:03.728 12:08:11 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:16:03.986 12:08:11 -- interrupt/interrupt_common.sh@47 -- # top_reactor='1277543 root 20 0 128.2g 33408 20736 S 0.0 0.0 0:00.31 reactor_0' 00:16:03.986 12:08:11 -- interrupt/interrupt_common.sh@48 -- # echo 1277543 root 20 0 128.2g 33408 20736 S 0.0 
0.0 0:00.31 reactor_0 00:16:03.986 12:08:11 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:16:03.986 12:08:11 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:16:03.986 12:08:11 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:16:03.986 12:08:11 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:16:03.986 12:08:11 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:16:03.986 12:08:11 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:16:03.986 12:08:11 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:16:03.986 12:08:11 -- interrupt/interrupt_common.sh@56 -- # return 0 00:16:03.986 12:08:11 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:16:03.986 12:08:11 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1277543 1 00:16:03.986 12:08:11 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 1277543 1 idle 00:16:03.986 12:08:11 -- interrupt/interrupt_common.sh@33 -- # local pid=1277543 00:16:03.986 12:08:11 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:16:03.986 12:08:11 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:16:03.986 12:08:11 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:16:03.986 12:08:11 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:16:03.986 12:08:11 -- interrupt/interrupt_common.sh@41 -- # hash top 00:16:03.986 12:08:11 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:16:03.986 12:08:11 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:16:03.986 12:08:11 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 1277543 -w 256 00:16:03.986 12:08:11 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:16:04.245 12:08:11 -- interrupt/interrupt_common.sh@47 -- # top_reactor='1277549 root 20 0 128.2g 33408 20736 S 0.0 0.0 0:00.00 reactor_1' 00:16:04.245 12:08:11 -- interrupt/interrupt_common.sh@48 -- # echo 1277549 root 20 0 128.2g 33408 20736 S 0.0 0.0 
0:00.00 reactor_1 00:16:04.245 12:08:11 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:16:04.245 12:08:11 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:16:04.245 12:08:11 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:16:04.245 12:08:11 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:16:04.245 12:08:11 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:16:04.245 12:08:11 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:16:04.245 12:08:11 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:16:04.245 12:08:11 -- interrupt/interrupt_common.sh@56 -- # return 0 00:16:04.245 12:08:11 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:16:04.245 12:08:11 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1277543 2 00:16:04.245 12:08:11 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 1277543 2 idle 00:16:04.245 12:08:11 -- interrupt/interrupt_common.sh@33 -- # local pid=1277543 00:16:04.245 12:08:11 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:16:04.245 12:08:11 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:16:04.245 12:08:11 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:16:04.245 12:08:11 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:16:04.245 12:08:11 -- interrupt/interrupt_common.sh@41 -- # hash top 00:16:04.245 12:08:11 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:16:04.245 12:08:11 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:16:04.245 12:08:11 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 1277543 -w 256 00:16:04.245 12:08:11 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:16:04.245 12:08:11 -- interrupt/interrupt_common.sh@47 -- # top_reactor='1277550 root 20 0 128.2g 33408 20736 S 0.0 0.0 0:00.00 reactor_2' 00:16:04.245 12:08:11 -- interrupt/interrupt_common.sh@48 -- # echo 1277550 root 20 0 128.2g 33408 20736 S 0.0 0.0 0:00.00 
reactor_2 00:16:04.245 12:08:11 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:16:04.245 12:08:11 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:16:04.245 12:08:11 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:16:04.245 12:08:11 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:16:04.245 12:08:11 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:16:04.245 12:08:11 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:16:04.245 12:08:11 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:16:04.245 12:08:11 -- interrupt/interrupt_common.sh@56 -- # return 0 00:16:04.245 12:08:11 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' without_thdx '!=' x ']' 00:16:04.245 12:08:11 -- interrupt/reactor_set_interrupt.sh@35 -- # for i in "${thd0_ids[@]}" 00:16:04.245 12:08:11 -- interrupt/reactor_set_interrupt.sh@36 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x2 00:16:04.503 [2024-07-25 12:08:11.706344] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:04.503 12:08:11 -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:16:04.762 [2024-07-25 12:08:11.886270] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:16:04.762 [2024-07-25 12:08:11.886642] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:16:04.762 12:08:11 -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:16:04.762 [2024-07-25 12:08:12.054178] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 
00:16:04.762 [2024-07-25 12:08:12.054299] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:16:04.762 12:08:12 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:16:04.762 12:08:12 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 1277543 0 00:16:04.762 12:08:12 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 1277543 0 busy 00:16:04.762 12:08:12 -- interrupt/interrupt_common.sh@33 -- # local pid=1277543 00:16:04.762 12:08:12 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:16:04.762 12:08:12 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:16:04.762 12:08:12 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:16:04.762 12:08:12 -- interrupt/interrupt_common.sh@41 -- # hash top 00:16:05.020 12:08:12 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:16:05.020 12:08:12 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:16:05.020 12:08:12 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 1277543 -w 256 00:16:05.020 12:08:12 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:16:05.020 12:08:12 -- interrupt/interrupt_common.sh@47 -- # top_reactor='1277543 root 20 0 128.2g 33408 20736 R 99.9 0.0 0:00.66 reactor_0' 00:16:05.020 12:08:12 -- interrupt/interrupt_common.sh@48 -- # echo 1277543 root 20 0 128.2g 33408 20736 R 99.9 0.0 0:00.66 reactor_0 00:16:05.020 12:08:12 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:16:05.020 12:08:12 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:16:05.020 12:08:12 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:16:05.020 12:08:12 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:16:05.020 12:08:12 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:16:05.020 12:08:12 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:16:05.020 12:08:12 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:16:05.020 12:08:12 -- 
interrupt/interrupt_common.sh@56 -- # return 0 00:16:05.020 12:08:12 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:16:05.020 12:08:12 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 1277543 2 00:16:05.020 12:08:12 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 1277543 2 busy 00:16:05.020 12:08:12 -- interrupt/interrupt_common.sh@33 -- # local pid=1277543 00:16:05.020 12:08:12 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:16:05.020 12:08:12 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:16:05.020 12:08:12 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:16:05.020 12:08:12 -- interrupt/interrupt_common.sh@41 -- # hash top 00:16:05.020 12:08:12 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:16:05.020 12:08:12 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:16:05.020 12:08:12 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 1277543 -w 256 00:16:05.020 12:08:12 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:16:05.277 12:08:12 -- interrupt/interrupt_common.sh@47 -- # top_reactor='1277550 root 20 0 128.2g 33408 20736 R 99.9 0.0 0:00.35 reactor_2' 00:16:05.277 12:08:12 -- interrupt/interrupt_common.sh@48 -- # echo 1277550 root 20 0 128.2g 33408 20736 R 99.9 0.0 0:00.35 reactor_2 00:16:05.277 12:08:12 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:16:05.277 12:08:12 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:16:05.277 12:08:12 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:16:05.277 12:08:12 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:16:05.277 12:08:12 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:16:05.277 12:08:12 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:16:05.277 12:08:12 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:16:05.277 12:08:12 -- interrupt/interrupt_common.sh@56 -- # return 0 00:16:05.277 12:08:12 -- 
interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:16:05.277 [2024-07-25 12:08:12.586190] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:16:05.277 [2024-07-25 12:08:12.586302] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:16:05.535 12:08:12 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' without_thdx '!=' x ']' 00:16:05.535 12:08:12 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 1277543 2 00:16:05.535 12:08:12 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 1277543 2 idle 00:16:05.535 12:08:12 -- interrupt/interrupt_common.sh@33 -- # local pid=1277543 00:16:05.535 12:08:12 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:16:05.535 12:08:12 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:16:05.535 12:08:12 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:16:05.535 12:08:12 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:16:05.535 12:08:12 -- interrupt/interrupt_common.sh@41 -- # hash top 00:16:05.535 12:08:12 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:16:05.535 12:08:12 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:16:05.535 12:08:12 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 1277543 -w 256 00:16:05.535 12:08:12 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:16:05.535 12:08:12 -- interrupt/interrupt_common.sh@47 -- # top_reactor='1277550 root 20 0 128.2g 33408 20736 S 0.0 0.0 0:00.53 reactor_2' 00:16:05.535 12:08:12 -- interrupt/interrupt_common.sh@48 -- # echo 1277550 root 20 0 128.2g 33408 20736 S 0.0 0.0 0:00.53 reactor_2 00:16:05.535 12:08:12 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:16:05.535 12:08:12 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:16:05.535 12:08:12 
-- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:16:05.535 12:08:12 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:16:05.535 12:08:12 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:16:05.535 12:08:12 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:16:05.535 12:08:12 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:16:05.535 12:08:12 -- interrupt/interrupt_common.sh@56 -- # return 0 00:16:05.535 12:08:12 -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:16:05.793 [2024-07-25 12:08:12.938181] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:16:05.793 [2024-07-25 12:08:12.938311] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:16:05.793 12:08:12 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' without_thdx '!=' x ']' 00:16:05.793 12:08:12 -- interrupt/reactor_set_interrupt.sh@65 -- # for i in "${thd0_ids[@]}" 00:16:05.793 12:08:12 -- interrupt/reactor_set_interrupt.sh@66 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_set_cpumask -i 1 -m 0x1 00:16:05.793 [2024-07-25 12:08:13.102604] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:16:06.051 12:08:13 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 1277543 0 00:16:06.051 12:08:13 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 1277543 0 idle 00:16:06.051 12:08:13 -- interrupt/interrupt_common.sh@33 -- # local pid=1277543 00:16:06.051 12:08:13 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:16:06.051 12:08:13 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:16:06.051 12:08:13 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:16:06.051 12:08:13 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:16:06.051 12:08:13 -- interrupt/interrupt_common.sh@41 -- # hash top 00:16:06.051 12:08:13 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:16:06.051 12:08:13 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:16:06.051 12:08:13 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 1277543 -w 256 00:16:06.051 12:08:13 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:16:06.051 12:08:13 -- interrupt/interrupt_common.sh@47 -- # top_reactor='1277543 root 20 0 128.2g 33408 20736 S 0.0 0.0 0:01.36 reactor_0' 00:16:06.051 12:08:13 -- interrupt/interrupt_common.sh@48 -- # echo 1277543 root 20 0 128.2g 33408 20736 S 0.0 0.0 0:01.36 reactor_0 00:16:06.051 12:08:13 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:16:06.051 12:08:13 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:16:06.051 12:08:13 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:16:06.051 12:08:13 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:16:06.051 12:08:13 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:16:06.051 12:08:13 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:16:06.051 12:08:13 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:16:06.051 12:08:13 -- interrupt/interrupt_common.sh@56 -- # return 0 00:16:06.051 12:08:13 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:16:06.051 
12:08:13 -- interrupt/reactor_set_interrupt.sh@77 -- # return 0 00:16:06.051 12:08:13 -- interrupt/reactor_set_interrupt.sh@92 -- # trap - SIGINT SIGTERM EXIT 00:16:06.051 12:08:13 -- interrupt/reactor_set_interrupt.sh@93 -- # killprocess 1277543 00:16:06.051 12:08:13 -- common/autotest_common.sh@926 -- # '[' -z 1277543 ']' 00:16:06.051 12:08:13 -- common/autotest_common.sh@930 -- # kill -0 1277543 00:16:06.051 12:08:13 -- common/autotest_common.sh@931 -- # uname 00:16:06.051 12:08:13 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:06.051 12:08:13 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1277543 00:16:06.051 12:08:13 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:06.051 12:08:13 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:06.051 12:08:13 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1277543' 00:16:06.051 killing process with pid 1277543 00:16:06.051 12:08:13 -- common/autotest_common.sh@945 -- # kill 1277543 00:16:06.051 12:08:13 -- common/autotest_common.sh@950 -- # wait 1277543 00:16:06.615 12:08:13 -- interrupt/reactor_set_interrupt.sh@94 -- # cleanup 00:16:06.615 12:08:13 -- interrupt/interrupt_common.sh@19 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:16:06.615 12:08:13 -- interrupt/reactor_set_interrupt.sh@97 -- # start_intr_tgt 00:16:06.615 12:08:13 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.615 12:08:13 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:16:06.615 12:08:13 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=1278186 00:16:06.615 12:08:13 -- interrupt/interrupt_common.sh@26 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:16:06.615 12:08:13 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:06.615 12:08:13 
-- interrupt/interrupt_common.sh@29 -- # waitforlisten 1278186 /var/tmp/spdk.sock 00:16:06.615 12:08:13 -- common/autotest_common.sh@819 -- # '[' -z 1278186 ']' 00:16:06.615 12:08:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:06.615 12:08:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:06.615 12:08:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:06.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:06.615 12:08:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:06.615 12:08:13 -- common/autotest_common.sh@10 -- # set +x 00:16:06.615 [2024-07-25 12:08:13.663789] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:06.615 [2024-07-25 12:08:13.663844] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1278186 ] 00:16:06.615 [2024-07-25 12:08:13.746733] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:06.615 [2024-07-25 12:08:13.836543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:06.615 [2024-07-25 12:08:13.836562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:06.615 [2024-07-25 12:08:13.836564] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:06.615 [2024-07-25 12:08:13.902336] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:16:07.549 12:08:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:07.549 12:08:14 -- common/autotest_common.sh@852 -- # return 0 00:16:07.549 12:08:14 -- interrupt/reactor_set_interrupt.sh@98 -- # setup_bdev_mem 00:16:07.549 12:08:14 -- interrupt/interrupt_common.sh@90 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:07.549 Malloc0 00:16:07.549 Malloc1 00:16:07.549 Malloc2 00:16:07.549 12:08:14 -- interrupt/reactor_set_interrupt.sh@99 -- # setup_bdev_aio 00:16:07.549 12:08:14 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:16:07.549 12:08:14 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:16:07.549 12:08:14 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:16:07.549 5000+0 records in 00:16:07.549 5000+0 records out 00:16:07.549 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0271001 s, 378 MB/s 00:16:07.549 12:08:14 -- interrupt/interrupt_common.sh@100 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:16:07.806 AIO0 00:16:07.806 12:08:14 -- interrupt/reactor_set_interrupt.sh@101 -- # reactor_set_mode_with_threads 1278186 00:16:07.806 12:08:14 -- interrupt/reactor_set_interrupt.sh@81 -- # reactor_set_intr_mode 1278186 00:16:07.806 12:08:14 -- interrupt/reactor_set_interrupt.sh@14 -- # local spdk_pid=1278186 00:16:07.806 12:08:14 -- interrupt/reactor_set_interrupt.sh@15 -- # local without_thd= 00:16:07.807 12:08:14 -- interrupt/reactor_set_interrupt.sh@17 -- # thd0_ids=($(reactor_get_thread_ids $r0_mask)) 00:16:07.807 12:08:14 -- interrupt/reactor_set_interrupt.sh@17 -- # reactor_get_thread_ids 0x1 00:16:07.807 12:08:14 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x1 00:16:07.807 12:08:14 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:16:07.807 12:08:14 -- 
interrupt/interrupt_common.sh@81 -- # reactor_cpumask=1 00:16:07.807 12:08:14 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:16:07.807 12:08:14 -- interrupt/interrupt_common.sh@85 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:16:07.807 12:08:14 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 1 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:16:07.807 12:08:15 -- interrupt/interrupt_common.sh@85 -- # echo 1 00:16:07.807 12:08:15 -- interrupt/reactor_set_interrupt.sh@18 -- # thd2_ids=($(reactor_get_thread_ids $r2_mask)) 00:16:07.807 12:08:15 -- interrupt/reactor_set_interrupt.sh@18 -- # reactor_get_thread_ids 0x4 00:16:07.807 12:08:15 -- interrupt/interrupt_common.sh@78 -- # local reactor_cpumask=0x4 00:16:07.807 12:08:15 -- interrupt/interrupt_common.sh@79 -- # local grep_str 00:16:07.807 12:08:15 -- interrupt/interrupt_common.sh@81 -- # reactor_cpumask=4 00:16:07.807 12:08:15 -- interrupt/interrupt_common.sh@82 -- # jq_str='.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:16:08.065 12:08:15 -- interrupt/interrupt_common.sh@85 -- # jq --arg reactor_cpumask 4 '.threads|.[]|select(.cpumask == $reactor_cpumask)|.id' 00:16:08.065 12:08:15 -- interrupt/interrupt_common.sh@85 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py thread_get_stats 00:16:08.065 12:08:15 -- interrupt/interrupt_common.sh@85 -- # echo '' 00:16:08.065 12:08:15 -- interrupt/reactor_set_interrupt.sh@21 -- # [[ 1 -eq 0 ]] 00:16:08.065 12:08:15 -- interrupt/reactor_set_interrupt.sh@25 -- # echo 'spdk_thread ids are 1 on reactor0.' 00:16:08.065 spdk_thread ids are 1 on reactor0. 
00:16:08.065 12:08:15 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:16:08.065 12:08:15 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1278186 0 00:16:08.065 12:08:15 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 1278186 0 idle 00:16:08.065 12:08:15 -- interrupt/interrupt_common.sh@33 -- # local pid=1278186 00:16:08.065 12:08:15 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:16:08.065 12:08:15 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:16:08.065 12:08:15 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:16:08.065 12:08:15 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:16:08.065 12:08:15 -- interrupt/interrupt_common.sh@41 -- # hash top 00:16:08.065 12:08:15 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:16:08.065 12:08:15 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:16:08.065 12:08:15 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 1278186 -w 256 00:16:08.065 12:08:15 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@47 -- # top_reactor='1278186 root 20 0 128.2g 34560 21312 S 0.0 0.0 0:00.30 reactor_0' 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@48 -- # echo 1278186 root 20 0 128.2g 34560 21312 S 0.0 0.0 0:00.30 reactor_0 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@56 -- # return 0 
00:16:08.387 12:08:15 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:16:08.387 12:08:15 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1278186 1 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 1278186 1 idle 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@33 -- # local pid=1278186 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@34 -- # local idx=1 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@41 -- # hash top 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 1278186 -w 256 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@47 -- # grep reactor_1 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@47 -- # top_reactor='1278227 root 20 0 128.2g 34560 21312 S 0.0 0.0 0:00.00 reactor_1' 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@48 -- # echo 1278227 root 20 0 128.2g 34560 21312 S 0.0 0.0 0:00.00 reactor_1 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@56 -- # return 0 
00:16:08.387 12:08:15 -- interrupt/reactor_set_interrupt.sh@29 -- # for i in {0..2} 00:16:08.387 12:08:15 -- interrupt/reactor_set_interrupt.sh@30 -- # reactor_is_idle 1278186 2 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 1278186 2 idle 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@33 -- # local pid=1278186 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@41 -- # hash top 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 1278186 -w 256 00:16:08.387 12:08:15 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:16:08.645 12:08:15 -- interrupt/interrupt_common.sh@47 -- # top_reactor='1278228 root 20 0 128.2g 34560 21312 S 0.0 0.0 0:00.00 reactor_2' 00:16:08.645 12:08:15 -- interrupt/interrupt_common.sh@48 -- # echo 1278228 root 20 0 128.2g 34560 21312 S 0.0 0.0 0:00.00 reactor_2 00:16:08.645 12:08:15 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:16:08.645 12:08:15 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:16:08.645 12:08:15 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:16:08.645 12:08:15 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:16:08.645 12:08:15 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:16:08.645 12:08:15 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:16:08.645 12:08:15 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:16:08.645 12:08:15 -- interrupt/interrupt_common.sh@56 -- # return 0 
00:16:08.645 12:08:15 -- interrupt/reactor_set_interrupt.sh@33 -- # '[' x '!=' x ']' 00:16:08.645 12:08:15 -- interrupt/reactor_set_interrupt.sh@43 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 -d 00:16:08.904 [2024-07-25 12:08:16.002425] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 0. 00:16:08.904 [2024-07-25 12:08:16.002543] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to poll mode from intr mode. 00:16:08.904 [2024-07-25 12:08:16.002720] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:16:08.904 12:08:16 -- interrupt/reactor_set_interrupt.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 -d 00:16:08.904 [2024-07-25 12:08:16.174790] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to disable interrupt mode on reactor 2. 
00:16:08.904 [2024-07-25 12:08:16.174951] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:16:08.904 12:08:16 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:16:08.904 12:08:16 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 1278186 0 00:16:08.904 12:08:16 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 1278186 0 busy 00:16:08.904 12:08:16 -- interrupt/interrupt_common.sh@33 -- # local pid=1278186 00:16:08.904 12:08:16 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:16:08.904 12:08:16 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:16:08.904 12:08:16 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:16:08.904 12:08:16 -- interrupt/interrupt_common.sh@41 -- # hash top 00:16:08.904 12:08:16 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:16:08.904 12:08:16 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:16:08.904 12:08:16 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 1278186 -w 256 00:16:08.904 12:08:16 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:16:09.162 12:08:16 -- interrupt/interrupt_common.sh@47 -- # top_reactor='1278186 root 20 0 128.2g 34560 21312 R 99.9 0.0 0:00.66 reactor_0' 00:16:09.162 12:08:16 -- interrupt/interrupt_common.sh@48 -- # echo 1278186 root 20 0 128.2g 34560 21312 R 99.9 0.0 0:00.66 reactor_0 00:16:09.162 12:08:16 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:16:09.162 12:08:16 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:16:09.163 12:08:16 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:16:09.163 12:08:16 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:16:09.163 12:08:16 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:16:09.163 12:08:16 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:16:09.163 12:08:16 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:16:09.163 12:08:16 -- 
interrupt/interrupt_common.sh@56 -- # return 0 00:16:09.163 12:08:16 -- interrupt/reactor_set_interrupt.sh@46 -- # for i in 0 2 00:16:09.163 12:08:16 -- interrupt/reactor_set_interrupt.sh@47 -- # reactor_is_busy 1278186 2 00:16:09.163 12:08:16 -- interrupt/interrupt_common.sh@70 -- # reactor_is_busy_or_idle 1278186 2 busy 00:16:09.163 12:08:16 -- interrupt/interrupt_common.sh@33 -- # local pid=1278186 00:16:09.163 12:08:16 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:16:09.163 12:08:16 -- interrupt/interrupt_common.sh@35 -- # local state=busy 00:16:09.163 12:08:16 -- interrupt/interrupt_common.sh@37 -- # [[ busy != \b\u\s\y ]] 00:16:09.163 12:08:16 -- interrupt/interrupt_common.sh@41 -- # hash top 00:16:09.163 12:08:16 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:16:09.163 12:08:16 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:16:09.163 12:08:16 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 1278186 -w 256 00:16:09.163 12:08:16 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:16:09.421 12:08:16 -- interrupt/interrupt_common.sh@47 -- # top_reactor='1278228 root 20 0 128.2g 34560 21312 R 99.9 0.0 0:00.36 reactor_2' 00:16:09.422 12:08:16 -- interrupt/interrupt_common.sh@48 -- # echo 1278228 root 20 0 128.2g 34560 21312 R 99.9 0.0 0:00.36 reactor_2 00:16:09.422 12:08:16 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:16:09.422 12:08:16 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:16:09.422 12:08:16 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=99.9 00:16:09.422 12:08:16 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=99 00:16:09.422 12:08:16 -- interrupt/interrupt_common.sh@51 -- # [[ busy = \b\u\s\y ]] 00:16:09.422 12:08:16 -- interrupt/interrupt_common.sh@51 -- # [[ 99 -lt 70 ]] 00:16:09.422 12:08:16 -- interrupt/interrupt_common.sh@53 -- # [[ busy = \i\d\l\e ]] 00:16:09.422 12:08:16 -- interrupt/interrupt_common.sh@56 -- # return 0 00:16:09.422 12:08:16 -- 
interrupt/reactor_set_interrupt.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 2 00:16:09.422 [2024-07-25 12:08:16.712301] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 2. 00:16:09.422 [2024-07-25 12:08:16.712397] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:16:09.680 12:08:16 -- interrupt/reactor_set_interrupt.sh@52 -- # '[' x '!=' x ']' 00:16:09.680 12:08:16 -- interrupt/reactor_set_interrupt.sh@59 -- # reactor_is_idle 1278186 2 00:16:09.680 12:08:16 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 1278186 2 idle 00:16:09.680 12:08:16 -- interrupt/interrupt_common.sh@33 -- # local pid=1278186 00:16:09.680 12:08:16 -- interrupt/interrupt_common.sh@34 -- # local idx=2 00:16:09.680 12:08:16 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:16:09.680 12:08:16 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:16:09.680 12:08:16 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:16:09.680 12:08:16 -- interrupt/interrupt_common.sh@41 -- # hash top 00:16:09.680 12:08:16 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:16:09.680 12:08:16 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:16:09.680 12:08:16 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 1278186 -w 256 00:16:09.680 12:08:16 -- interrupt/interrupt_common.sh@47 -- # grep reactor_2 00:16:09.680 12:08:16 -- interrupt/interrupt_common.sh@47 -- # top_reactor='1278228 root 20 0 128.2g 34560 21312 S 0.0 0.0 0:00.53 reactor_2' 00:16:09.680 12:08:16 -- interrupt/interrupt_common.sh@48 -- # echo 1278228 root 20 0 128.2g 34560 21312 S 0.0 0.0 0:00.53 reactor_2 00:16:09.680 12:08:16 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:16:09.680 12:08:16 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:16:09.680 12:08:16 -- 
interrupt/interrupt_common.sh@48 -- # cpu_rate=0.0 00:16:09.680 12:08:16 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=0 00:16:09.680 12:08:16 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:16:09.680 12:08:16 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:16:09.680 12:08:16 -- interrupt/interrupt_common.sh@53 -- # [[ 0 -gt 30 ]] 00:16:09.680 12:08:16 -- interrupt/interrupt_common.sh@56 -- # return 0 00:16:09.680 12:08:16 -- interrupt/reactor_set_interrupt.sh@62 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py --plugin interrupt_plugin reactor_set_interrupt_mode 0 00:16:09.940 [2024-07-25 12:08:17.069215] interrupt_tgt.c: 61:rpc_reactor_set_interrupt_mode: *NOTICE*: RPC Start to enable interrupt mode on reactor 0. 00:16:09.940 [2024-07-25 12:08:17.069392] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from poll mode. 00:16:09.940 [2024-07-25 12:08:17.069407] interrupt_tgt.c: 32:rpc_reactor_set_interrupt_mode_cb: *NOTICE*: complete reactor switch 00:16:09.940 12:08:17 -- interrupt/reactor_set_interrupt.sh@63 -- # '[' x '!=' x ']' 00:16:09.940 12:08:17 -- interrupt/reactor_set_interrupt.sh@70 -- # reactor_is_idle 1278186 0 00:16:09.940 12:08:17 -- interrupt/interrupt_common.sh@74 -- # reactor_is_busy_or_idle 1278186 0 idle 00:16:09.940 12:08:17 -- interrupt/interrupt_common.sh@33 -- # local pid=1278186 00:16:09.940 12:08:17 -- interrupt/interrupt_common.sh@34 -- # local idx=0 00:16:09.940 12:08:17 -- interrupt/interrupt_common.sh@35 -- # local state=idle 00:16:09.940 12:08:17 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \b\u\s\y ]] 00:16:09.940 12:08:17 -- interrupt/interrupt_common.sh@37 -- # [[ idle != \i\d\l\e ]] 00:16:09.940 12:08:17 -- interrupt/interrupt_common.sh@41 -- # hash top 00:16:09.940 12:08:17 -- interrupt/interrupt_common.sh@46 -- # (( j = 10 )) 00:16:09.940 12:08:17 -- interrupt/interrupt_common.sh@46 -- # (( j != 0 )) 00:16:09.940 
12:08:17 -- interrupt/interrupt_common.sh@47 -- # top -bHn 1 -p 1278186 -w 256 00:16:09.940 12:08:17 -- interrupt/interrupt_common.sh@47 -- # grep reactor_0 00:16:10.199 12:08:17 -- interrupt/interrupt_common.sh@47 -- # top_reactor='1278186 root 20 0 128.2g 34560 21312 S 6.7 0.0 0:01.38 reactor_0' 00:16:10.199 12:08:17 -- interrupt/interrupt_common.sh@48 -- # echo 1278186 root 20 0 128.2g 34560 21312 S 6.7 0.0 0:01.38 reactor_0 00:16:10.199 12:08:17 -- interrupt/interrupt_common.sh@48 -- # sed -e 's/^\s*//g' 00:16:10.199 12:08:17 -- interrupt/interrupt_common.sh@48 -- # awk '{print $9}' 00:16:10.199 12:08:17 -- interrupt/interrupt_common.sh@48 -- # cpu_rate=6.7 00:16:10.199 12:08:17 -- interrupt/interrupt_common.sh@49 -- # cpu_rate=6 00:16:10.199 12:08:17 -- interrupt/interrupt_common.sh@51 -- # [[ idle = \b\u\s\y ]] 00:16:10.199 12:08:17 -- interrupt/interrupt_common.sh@53 -- # [[ idle = \i\d\l\e ]] 00:16:10.199 12:08:17 -- interrupt/interrupt_common.sh@53 -- # [[ 6 -gt 30 ]] 00:16:10.199 12:08:17 -- interrupt/interrupt_common.sh@56 -- # return 0 00:16:10.199 12:08:17 -- interrupt/reactor_set_interrupt.sh@72 -- # return 0 00:16:10.199 12:08:17 -- interrupt/reactor_set_interrupt.sh@82 -- # return 0 00:16:10.199 12:08:17 -- interrupt/reactor_set_interrupt.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:16:10.199 12:08:17 -- interrupt/reactor_set_interrupt.sh@104 -- # killprocess 1278186 00:16:10.199 12:08:17 -- common/autotest_common.sh@926 -- # '[' -z 1278186 ']' 00:16:10.199 12:08:17 -- common/autotest_common.sh@930 -- # kill -0 1278186 00:16:10.199 12:08:17 -- common/autotest_common.sh@931 -- # uname 00:16:10.199 12:08:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:10.199 12:08:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1278186 00:16:10.199 12:08:17 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:10.199 12:08:17 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:10.199 12:08:17 -- 
common/autotest_common.sh@944 -- # echo 'killing process with pid 1278186' 00:16:10.199 killing process with pid 1278186 00:16:10.199 12:08:17 -- common/autotest_common.sh@945 -- # kill 1278186 00:16:10.199 12:08:17 -- common/autotest_common.sh@950 -- # wait 1278186 00:16:10.459 12:08:17 -- interrupt/reactor_set_interrupt.sh@105 -- # cleanup 00:16:10.459 12:08:17 -- interrupt/interrupt_common.sh@19 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:16:10.459 00:16:10.459 real 0m8.504s 00:16:10.459 user 0m7.403s 00:16:10.459 sys 0m1.917s 00:16:10.459 12:08:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:10.459 12:08:17 -- common/autotest_common.sh@10 -- # set +x 00:16:10.459 ************************************ 00:16:10.459 END TEST reactor_set_interrupt 00:16:10.459 ************************************ 00:16:10.459 12:08:17 -- spdk/autotest.sh@200 -- # run_test reap_unregistered_poller /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:16:10.459 12:08:17 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:10.459 12:08:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:10.459 12:08:17 -- common/autotest_common.sh@10 -- # set +x 00:16:10.459 ************************************ 00:16:10.459 START TEST reap_unregistered_poller 00:16:10.459 ************************************ 00:16:10.459 12:08:17 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:16:10.459 * Looking for test storage... 
00:16:10.459 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:16:10.459 12:08:17 -- interrupt/reap_unregistered_poller.sh@9 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/interrupt_common.sh 00:16:10.459 12:08:17 -- interrupt/interrupt_common.sh@5 -- # dirname /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/reap_unregistered_poller.sh 00:16:10.459 12:08:17 -- interrupt/interrupt_common.sh@5 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:16:10.459 12:08:17 -- interrupt/interrupt_common.sh@5 -- # testdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:16:10.459 12:08:17 -- interrupt/interrupt_common.sh@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/../.. 00:16:10.459 12:08:17 -- interrupt/interrupt_common.sh@6 -- # rootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:16:10.459 12:08:17 -- interrupt/interrupt_common.sh@7 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh 00:16:10.459 12:08:17 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:16:10.459 12:08:17 -- common/autotest_common.sh@34 -- # set -e 00:16:10.459 12:08:17 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:16:10.459 12:08:17 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:16:10.459 12:08:17 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh ]] 00:16:10.459 12:08:17 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/build_config.sh 00:16:10.459 12:08:17 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:16:10.459 12:08:17 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:16:10.459 12:08:17 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=y 00:16:10.459 12:08:17 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:16:10.459 12:08:17 
-- common/build_config.sh@5 -- # CONFIG_USDT=n 00:16:10.459 12:08:17 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:16:10.459 12:08:17 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:16:10.459 12:08:17 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:16:10.459 12:08:17 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:16:10.459 12:08:17 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:16:10.459 12:08:17 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:16:10.459 12:08:17 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:16:10.459 12:08:17 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:16:10.459 12:08:17 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:16:10.459 12:08:17 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:16:10.459 12:08:17 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:16:10.459 12:08:17 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:16:10.459 12:08:17 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:16:10.459 12:08:17 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:16:10.459 12:08:17 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:16:10.459 12:08:17 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:16:10.459 12:08:17 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:16:10.459 12:08:17 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=y 00:16:10.459 12:08:17 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:16:10.459 12:08:17 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:16:10.459 12:08:17 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:16:10.459 12:08:17 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:16:10.459 12:08:17 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:16:10.459 12:08:17 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:16:10.459 12:08:17 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 
00:16:10.459 12:08:17 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:16:10.459 12:08:17 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:16:10.459 12:08:17 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:16:10.459 12:08:17 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:16:10.459 12:08:17 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:16:10.459 12:08:17 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:16:10.459 12:08:17 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=y 00:16:10.459 12:08:17 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:16:10.459 12:08:17 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:16:10.459 12:08:17 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:16:10.459 12:08:17 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:16:10.459 12:08:17 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:16:10.459 12:08:17 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:16:10.459 12:08:17 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:16:10.459 12:08:17 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:16:10.459 12:08:17 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:16:10.459 12:08:17 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:16:10.459 12:08:17 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:16:10.459 12:08:17 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:16:10.459 12:08:17 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:16:10.459 12:08:17 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:16:10.459 12:08:17 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:16:10.459 12:08:17 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:16:10.459 12:08:17 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:16:10.459 12:08:17 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:16:10.459 12:08:17 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:16:10.459 
12:08:17 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:16:10.459 12:08:17 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:16:10.459 12:08:17 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:16:10.459 12:08:17 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:16:10.459 12:08:17 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR= 00:16:10.459 12:08:17 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:16:10.459 12:08:17 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:16:10.459 12:08:17 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:16:10.459 12:08:17 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:16:10.459 12:08:17 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:16:10.459 12:08:17 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:16:10.459 12:08:17 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:16:10.459 12:08:17 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:16:10.459 12:08:17 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:16:10.459 12:08:17 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:16:10.459 12:08:17 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:16:10.459 12:08:17 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=y 00:16:10.459 12:08:17 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:16:10.459 12:08:17 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=y 00:16:10.459 12:08:17 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:16:10.459 12:08:17 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=y 00:16:10.460 12:08:17 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:16:10.460 12:08:17 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:16:10.460 12:08:17 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:16:10.460 12:08:17 -- common/applications.sh@8 -- # dirname 
/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/applications.sh 00:16:10.720 12:08:17 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:16:10.720 12:08:17 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/common 00:16:10.720 12:08:17 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:16:10.720 12:08:17 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:16:10.720 12:08:17 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/app 00:16:10.720 12:08:17 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:16:10.720 12:08:17 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:16:10.720 12:08:17 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:16:10.720 12:08:17 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:16:10.720 12:08:17 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:16:10.720 12:08:17 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:16:10.720 12:08:17 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:16:10.720 12:08:17 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/crypto-phy-autotest/spdk/include/spdk/config.h ]] 00:16:10.720 12:08:17 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:16:10.720 #define SPDK_CONFIG_H 00:16:10.720 #define SPDK_CONFIG_APPS 1 00:16:10.720 #define SPDK_CONFIG_ARCH native 00:16:10.720 #undef SPDK_CONFIG_ASAN 00:16:10.720 #undef SPDK_CONFIG_AVAHI 00:16:10.720 #undef SPDK_CONFIG_CET 00:16:10.720 #define SPDK_CONFIG_COVERAGE 1 00:16:10.720 #define SPDK_CONFIG_CROSS_PREFIX 00:16:10.720 #define SPDK_CONFIG_CRYPTO 1 00:16:10.720 #define SPDK_CONFIG_CRYPTO_MLX5 1 
00:16:10.720 #undef SPDK_CONFIG_CUSTOMOCF 00:16:10.720 #undef SPDK_CONFIG_DAOS 00:16:10.720 #define SPDK_CONFIG_DAOS_DIR 00:16:10.720 #define SPDK_CONFIG_DEBUG 1 00:16:10.720 #define SPDK_CONFIG_DPDK_COMPRESSDEV 1 00:16:10.720 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build 00:16:10.720 #define SPDK_CONFIG_DPDK_INC_DIR 00:16:10.720 #define SPDK_CONFIG_DPDK_LIB_DIR 00:16:10.720 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:16:10.720 #define SPDK_CONFIG_ENV /var/jenkins/workspace/crypto-phy-autotest/spdk/lib/env_dpdk 00:16:10.720 #define SPDK_CONFIG_EXAMPLES 1 00:16:10.720 #undef SPDK_CONFIG_FC 00:16:10.720 #define SPDK_CONFIG_FC_PATH 00:16:10.720 #define SPDK_CONFIG_FIO_PLUGIN 1 00:16:10.720 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:16:10.720 #undef SPDK_CONFIG_FUSE 00:16:10.720 #undef SPDK_CONFIG_FUZZER 00:16:10.720 #define SPDK_CONFIG_FUZZER_LIB 00:16:10.720 #undef SPDK_CONFIG_GOLANG 00:16:10.720 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:16:10.720 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:16:10.720 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:16:10.720 #undef SPDK_CONFIG_HAVE_LIBBSD 00:16:10.720 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:16:10.720 #define SPDK_CONFIG_IDXD 1 00:16:10.720 #define SPDK_CONFIG_IDXD_KERNEL 1 00:16:10.720 #define SPDK_CONFIG_IPSEC_MB 1 00:16:10.720 #define SPDK_CONFIG_IPSEC_MB_DIR /var/jenkins/workspace/crypto-phy-autotest/spdk/intel-ipsec-mb/lib 00:16:10.720 #define SPDK_CONFIG_ISAL 1 00:16:10.720 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:16:10.720 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:16:10.720 #define SPDK_CONFIG_LIBDIR 00:16:10.720 #undef SPDK_CONFIG_LTO 00:16:10.720 #define SPDK_CONFIG_MAX_LCORES 00:16:10.720 #define SPDK_CONFIG_NVME_CUSE 1 00:16:10.720 #undef SPDK_CONFIG_OCF 00:16:10.720 #define SPDK_CONFIG_OCF_PATH 00:16:10.720 #define SPDK_CONFIG_OPENSSL_PATH 00:16:10.720 #undef SPDK_CONFIG_PGO_CAPTURE 00:16:10.720 #undef SPDK_CONFIG_PGO_USE 00:16:10.720 #define SPDK_CONFIG_PREFIX /usr/local 
00:16:10.720 #undef SPDK_CONFIG_RAID5F 00:16:10.720 #undef SPDK_CONFIG_RBD 00:16:10.720 #define SPDK_CONFIG_RDMA 1 00:16:10.720 #define SPDK_CONFIG_RDMA_PROV verbs 00:16:10.720 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:16:10.720 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:16:10.720 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:16:10.720 #define SPDK_CONFIG_SHARED 1 00:16:10.720 #undef SPDK_CONFIG_SMA 00:16:10.720 #define SPDK_CONFIG_TESTS 1 00:16:10.720 #undef SPDK_CONFIG_TSAN 00:16:10.720 #define SPDK_CONFIG_UBLK 1 00:16:10.720 #define SPDK_CONFIG_UBSAN 1 00:16:10.720 #undef SPDK_CONFIG_UNIT_TESTS 00:16:10.720 #undef SPDK_CONFIG_URING 00:16:10.720 #define SPDK_CONFIG_URING_PATH 00:16:10.720 #undef SPDK_CONFIG_URING_ZNS 00:16:10.720 #undef SPDK_CONFIG_USDT 00:16:10.720 #define SPDK_CONFIG_VBDEV_COMPRESS 1 00:16:10.720 #define SPDK_CONFIG_VBDEV_COMPRESS_MLX5 1 00:16:10.720 #undef SPDK_CONFIG_VFIO_USER 00:16:10.720 #define SPDK_CONFIG_VFIO_USER_DIR 00:16:10.720 #define SPDK_CONFIG_VHOST 1 00:16:10.720 #define SPDK_CONFIG_VIRTIO 1 00:16:10.720 #undef SPDK_CONFIG_VTUNE 00:16:10.720 #define SPDK_CONFIG_VTUNE_DIR 00:16:10.720 #define SPDK_CONFIG_WERROR 1 00:16:10.720 #define SPDK_CONFIG_WPDK_DIR 00:16:10.720 #undef SPDK_CONFIG_XNVME 00:16:10.720 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:16:10.720 12:08:17 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:16:10.720 12:08:17 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:16:10.720 12:08:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:10.720 12:08:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:10.720 12:08:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:10.720 12:08:17 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.720 12:08:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.720 12:08:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.720 12:08:17 -- paths/export.sh@5 -- # export PATH 00:16:10.720 12:08:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:10.720 12:08:17 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:16:10.720 12:08:17 -- pm/common@6 -- # dirname 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/common 00:16:10.720 12:08:17 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:16:10.720 12:08:17 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm 00:16:10.720 12:08:17 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/perf/pm/../../../ 00:16:10.720 12:08:17 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/crypto-phy-autotest/spdk 00:16:10.720 12:08:17 -- pm/common@16 -- # TEST_TAG=N/A 00:16:10.720 12:08:17 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/crypto-phy-autotest/spdk/.run_test_name 00:16:10.720 12:08:17 -- common/autotest_common.sh@52 -- # : 1 00:16:10.721 12:08:17 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:16:10.721 12:08:17 -- common/autotest_common.sh@56 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:16:10.721 12:08:17 -- common/autotest_common.sh@58 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:16:10.721 12:08:17 -- common/autotest_common.sh@60 -- # : 1 00:16:10.721 12:08:17 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:16:10.721 12:08:17 -- common/autotest_common.sh@62 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:16:10.721 12:08:17 -- common/autotest_common.sh@64 -- # : 00:16:10.721 12:08:17 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:16:10.721 12:08:17 -- common/autotest_common.sh@66 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:16:10.721 12:08:17 -- common/autotest_common.sh@68 -- # : 1 00:16:10.721 12:08:17 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:16:10.721 12:08:17 -- common/autotest_common.sh@70 -- # : 0 00:16:10.721 12:08:17 -- 
common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:16:10.721 12:08:17 -- common/autotest_common.sh@72 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:16:10.721 12:08:17 -- common/autotest_common.sh@74 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:16:10.721 12:08:17 -- common/autotest_common.sh@76 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:16:10.721 12:08:17 -- common/autotest_common.sh@78 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:16:10.721 12:08:17 -- common/autotest_common.sh@80 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:16:10.721 12:08:17 -- common/autotest_common.sh@82 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:16:10.721 12:08:17 -- common/autotest_common.sh@84 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:16:10.721 12:08:17 -- common/autotest_common.sh@86 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:16:10.721 12:08:17 -- common/autotest_common.sh@88 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:16:10.721 12:08:17 -- common/autotest_common.sh@90 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:16:10.721 12:08:17 -- common/autotest_common.sh@92 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:16:10.721 12:08:17 -- common/autotest_common.sh@94 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:16:10.721 12:08:17 -- common/autotest_common.sh@96 -- # : rdma 00:16:10.721 12:08:17 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 
00:16:10.721 12:08:17 -- common/autotest_common.sh@98 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:16:10.721 12:08:17 -- common/autotest_common.sh@100 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:16:10.721 12:08:17 -- common/autotest_common.sh@102 -- # : 1 00:16:10.721 12:08:17 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:16:10.721 12:08:17 -- common/autotest_common.sh@104 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:16:10.721 12:08:17 -- common/autotest_common.sh@106 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:16:10.721 12:08:17 -- common/autotest_common.sh@108 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:16:10.721 12:08:17 -- common/autotest_common.sh@110 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:16:10.721 12:08:17 -- common/autotest_common.sh@112 -- # : 1 00:16:10.721 12:08:17 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:16:10.721 12:08:17 -- common/autotest_common.sh@114 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:16:10.721 12:08:17 -- common/autotest_common.sh@116 -- # : 1 00:16:10.721 12:08:17 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:16:10.721 12:08:17 -- common/autotest_common.sh@118 -- # : 00:16:10.721 12:08:17 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:16:10.721 12:08:17 -- common/autotest_common.sh@120 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:16:10.721 12:08:17 -- common/autotest_common.sh@122 -- # : 1 00:16:10.721 12:08:17 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:16:10.721 12:08:17 -- common/autotest_common.sh@124 -- # : 0 
00:16:10.721 12:08:17 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:16:10.721 12:08:17 -- common/autotest_common.sh@126 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:16:10.721 12:08:17 -- common/autotest_common.sh@128 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 00:16:10.721 12:08:17 -- common/autotest_common.sh@130 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:16:10.721 12:08:17 -- common/autotest_common.sh@132 -- # : 00:16:10.721 12:08:17 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:16:10.721 12:08:17 -- common/autotest_common.sh@134 -- # : true 00:16:10.721 12:08:17 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:16:10.721 12:08:17 -- common/autotest_common.sh@136 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:16:10.721 12:08:17 -- common/autotest_common.sh@138 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:16:10.721 12:08:17 -- common/autotest_common.sh@140 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:16:10.721 12:08:17 -- common/autotest_common.sh@142 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:16:10.721 12:08:17 -- common/autotest_common.sh@144 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:16:10.721 12:08:17 -- common/autotest_common.sh@146 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:16:10.721 12:08:17 -- common/autotest_common.sh@148 -- # : 00:16:10.721 12:08:17 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:16:10.721 12:08:17 -- common/autotest_common.sh@150 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@151 -- # export 
SPDK_TEST_SMA 00:16:10.721 12:08:17 -- common/autotest_common.sh@152 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:16:10.721 12:08:17 -- common/autotest_common.sh@154 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:16:10.721 12:08:17 -- common/autotest_common.sh@156 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:16:10.721 12:08:17 -- common/autotest_common.sh@158 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:16:10.721 12:08:17 -- common/autotest_common.sh@160 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:16:10.721 12:08:17 -- common/autotest_common.sh@163 -- # : 00:16:10.721 12:08:17 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:16:10.721 12:08:17 -- common/autotest_common.sh@165 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:16:10.721 12:08:17 -- common/autotest_common.sh@167 -- # : 0 00:16:10.721 12:08:17 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:16:10.721 12:08:17 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:16:10.721 12:08:17 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib 00:16:10.721 12:08:17 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:16:10.721 12:08:17 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib 00:16:10.721 12:08:17 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:10.721 12:08:17 -- common/autotest_common.sh@173 -- # 
VFIO_LIB_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:10.721 12:08:17 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:10.721 12:08:17 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/crypto-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:16:10.721 12:08:17 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:16:10.721 12:08:17 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:16:10.722 12:08:17 -- common/autotest_common.sh@181 -- # export 
PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:16:10.722 12:08:17 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python 00:16:10.722 12:08:17 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:16:10.722 12:08:17 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:16:10.722 12:08:17 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:10.722 12:08:17 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:16:10.722 12:08:17 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:10.722 12:08:17 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:16:10.722 12:08:17 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:16:10.722 12:08:17 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:16:10.722 12:08:17 -- common/autotest_common.sh@196 -- # cat 00:16:10.722 12:08:17 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:16:10.722 12:08:17 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:10.722 12:08:17 -- 
common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:16:10.722 12:08:17 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:10.722 12:08:17 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:16:10.722 12:08:17 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:16:10.722 12:08:17 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:16:10.722 12:08:17 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:16:10.722 12:08:17 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/bin 00:16:10.722 12:08:17 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:16:10.722 12:08:17 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples 00:16:10.722 12:08:17 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:10.722 12:08:17 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:16:10.722 12:08:17 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:10.722 12:08:17 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:16:10.722 12:08:17 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:16:10.722 12:08:17 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:16:10.722 12:08:17 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:10.722 12:08:17 -- common/autotest_common.sh@245 -- # 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:16:10.722 12:08:17 -- common/autotest_common.sh@248 -- # '[' 0 -eq 0 ']' 00:16:10.722 12:08:17 -- common/autotest_common.sh@249 -- # export valgrind= 00:16:10.722 12:08:17 -- common/autotest_common.sh@249 -- # valgrind= 00:16:10.722 12:08:17 -- common/autotest_common.sh@255 -- # uname -s 00:16:10.722 12:08:17 -- common/autotest_common.sh@255 -- # '[' Linux = Linux ']' 00:16:10.722 12:08:17 -- common/autotest_common.sh@256 -- # HUGEMEM=4096 00:16:10.722 12:08:17 -- common/autotest_common.sh@257 -- # export CLEAR_HUGE=yes 00:16:10.722 12:08:17 -- common/autotest_common.sh@257 -- # CLEAR_HUGE=yes 00:16:10.722 12:08:17 -- common/autotest_common.sh@258 -- # [[ 1 -eq 1 ]] 00:16:10.722 12:08:17 -- common/autotest_common.sh@262 -- # export HUGE_EVEN_ALLOC=yes 00:16:10.722 12:08:17 -- common/autotest_common.sh@262 -- # HUGE_EVEN_ALLOC=yes 00:16:10.722 12:08:17 -- common/autotest_common.sh@265 -- # MAKE=make 00:16:10.722 12:08:17 -- common/autotest_common.sh@266 -- # MAKEFLAGS=-j72 00:16:10.722 12:08:17 -- common/autotest_common.sh@282 -- # export HUGEMEM=4096 00:16:10.722 12:08:17 -- common/autotest_common.sh@282 -- # HUGEMEM=4096 00:16:10.722 12:08:17 -- common/autotest_common.sh@284 -- # '[' -z /var/jenkins/workspace/crypto-phy-autotest/spdk/../output ']' 00:16:10.722 12:08:17 -- common/autotest_common.sh@289 -- # NO_HUGE=() 00:16:10.722 12:08:17 -- common/autotest_common.sh@290 -- # TEST_MODE= 00:16:10.722 12:08:17 -- common/autotest_common.sh@309 -- # [[ -z 1278831 ]] 00:16:10.722 12:08:17 -- common/autotest_common.sh@309 -- # kill -0 1278831 00:16:10.722 12:08:17 -- common/autotest_common.sh@1665 -- # set_test_storage 2147483648 00:16:10.722 12:08:17 -- common/autotest_common.sh@319 -- # [[ -v testdir ]] 00:16:10.722 12:08:17 -- common/autotest_common.sh@321 -- # local requested_size=2147483648 00:16:10.722 12:08:17 -- common/autotest_common.sh@322 -- # local mount target_dir 00:16:10.722 12:08:17 -- common/autotest_common.sh@324 -- # 
local -A mounts fss sizes avails uses 00:16:10.722 12:08:17 -- common/autotest_common.sh@325 -- # local source fs size avail mount use 00:16:10.722 12:08:17 -- common/autotest_common.sh@327 -- # local storage_fallback storage_candidates 00:16:10.722 12:08:17 -- common/autotest_common.sh@329 -- # mktemp -udt spdk.XXXXXX 00:16:10.722 12:08:17 -- common/autotest_common.sh@329 -- # storage_fallback=/tmp/spdk.LLXRXN 00:16:10.722 12:08:17 -- common/autotest_common.sh@334 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:16:10.722 12:08:17 -- common/autotest_common.sh@336 -- # [[ -n '' ]] 00:16:10.722 12:08:17 -- common/autotest_common.sh@341 -- # [[ -n '' ]] 00:16:10.722 12:08:17 -- common/autotest_common.sh@346 -- # mkdir -p /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt /tmp/spdk.LLXRXN/tests/interrupt /tmp/spdk.LLXRXN 00:16:10.722 12:08:17 -- common/autotest_common.sh@349 -- # requested_size=2214592512 00:16:10.722 12:08:17 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:16:10.722 12:08:17 -- common/autotest_common.sh@318 -- # df -T 00:16:10.722 12:08:17 -- common/autotest_common.sh@318 -- # grep -v Filesystem 00:16:10.722 12:08:17 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_devtmpfs 00:16:10.722 12:08:17 -- common/autotest_common.sh@352 -- # fss["$mount"]=devtmpfs 00:16:10.722 12:08:17 -- common/autotest_common.sh@353 -- # avails["$mount"]=67108864 00:16:10.722 12:08:17 -- common/autotest_common.sh@353 -- # sizes["$mount"]=67108864 00:16:10.722 12:08:17 -- common/autotest_common.sh@354 -- # uses["$mount"]=0 00:16:10.722 12:08:17 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:16:10.722 12:08:17 -- common/autotest_common.sh@352 -- # mounts["$mount"]=/dev/pmem0 00:16:10.722 12:08:17 -- common/autotest_common.sh@352 -- # fss["$mount"]=ext2 00:16:10.722 12:08:17 -- common/autotest_common.sh@353 -- # 
avails["$mount"]=955527168 00:16:10.722 12:08:17 -- common/autotest_common.sh@353 -- # sizes["$mount"]=5284429824 00:16:10.722 12:08:17 -- common/autotest_common.sh@354 -- # uses["$mount"]=4328902656 00:16:10.722 12:08:17 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:16:10.722 12:08:17 -- common/autotest_common.sh@352 -- # mounts["$mount"]=spdk_root 00:16:10.722 12:08:17 -- common/autotest_common.sh@352 -- # fss["$mount"]=overlay 00:16:10.722 12:08:17 -- common/autotest_common.sh@353 -- # avails["$mount"]=83648696320 00:16:10.722 12:08:17 -- common/autotest_common.sh@353 -- # sizes["$mount"]=94508597248 00:16:10.722 12:08:17 -- common/autotest_common.sh@354 -- # uses["$mount"]=10859900928 00:16:10.722 12:08:17 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:16:10.722 12:08:17 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:16:10.722 12:08:17 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:16:10.722 12:08:17 -- common/autotest_common.sh@353 -- # avails["$mount"]=47251705856 00:16:10.722 12:08:17 -- common/autotest_common.sh@353 -- # sizes["$mount"]=47254298624 00:16:10.722 12:08:17 -- common/autotest_common.sh@354 -- # uses["$mount"]=2592768 00:16:10.722 12:08:17 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:16:10.722 12:08:17 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:16:10.722 12:08:17 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:16:10.722 12:08:17 -- common/autotest_common.sh@353 -- # avails["$mount"]=18892201984 00:16:10.722 12:08:17 -- common/autotest_common.sh@353 -- # sizes["$mount"]=18901721088 00:16:10.722 12:08:17 -- common/autotest_common.sh@354 -- # uses["$mount"]=9519104 00:16:10.722 12:08:17 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:16:10.722 12:08:17 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:16:10.722 12:08:17 -- 
common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:16:10.722 12:08:17 -- common/autotest_common.sh@353 -- # avails["$mount"]=47253647360 00:16:10.722 12:08:17 -- common/autotest_common.sh@353 -- # sizes["$mount"]=47254298624 00:16:10.722 12:08:17 -- common/autotest_common.sh@354 -- # uses["$mount"]=651264 00:16:10.722 12:08:17 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:16:10.722 12:08:17 -- common/autotest_common.sh@352 -- # mounts["$mount"]=tmpfs 00:16:10.722 12:08:17 -- common/autotest_common.sh@352 -- # fss["$mount"]=tmpfs 00:16:10.722 12:08:17 -- common/autotest_common.sh@353 -- # avails["$mount"]=9450852352 00:16:10.722 12:08:17 -- common/autotest_common.sh@353 -- # sizes["$mount"]=9450856448 00:16:10.722 12:08:17 -- common/autotest_common.sh@354 -- # uses["$mount"]=4096 00:16:10.722 12:08:17 -- common/autotest_common.sh@351 -- # read -r source fs size use avail _ mount 00:16:10.722 12:08:17 -- common/autotest_common.sh@357 -- # printf '* Looking for test storage...\n' 00:16:10.722 * Looking for test storage... 
00:16:10.722 12:08:17 -- common/autotest_common.sh@359 -- # local target_space new_size 00:16:10.722 12:08:17 -- common/autotest_common.sh@360 -- # for target_dir in "${storage_candidates[@]}" 00:16:10.722 12:08:17 -- common/autotest_common.sh@363 -- # df /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:16:10.722 12:08:17 -- common/autotest_common.sh@363 -- # awk '$1 !~ /Filesystem/{print $6}' 00:16:10.722 12:08:17 -- common/autotest_common.sh@363 -- # mount=/ 00:16:10.722 12:08:17 -- common/autotest_common.sh@365 -- # target_space=83648696320 00:16:10.722 12:08:17 -- common/autotest_common.sh@366 -- # (( target_space == 0 || target_space < requested_size )) 00:16:10.722 12:08:17 -- common/autotest_common.sh@369 -- # (( target_space >= requested_size )) 00:16:10.722 12:08:17 -- common/autotest_common.sh@371 -- # [[ overlay == tmpfs ]] 00:16:10.722 12:08:17 -- common/autotest_common.sh@371 -- # [[ overlay == ramfs ]] 00:16:10.723 12:08:17 -- common/autotest_common.sh@371 -- # [[ / == / ]] 00:16:10.723 12:08:17 -- common/autotest_common.sh@372 -- # new_size=13074493440 00:16:10.723 12:08:17 -- common/autotest_common.sh@373 -- # (( new_size * 100 / sizes[/] > 95 )) 00:16:10.723 12:08:17 -- common/autotest_common.sh@378 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:16:10.723 12:08:17 -- common/autotest_common.sh@378 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:16:10.723 12:08:17 -- common/autotest_common.sh@379 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:16:10.723 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt 00:16:10.723 12:08:17 -- common/autotest_common.sh@380 -- # return 0 00:16:10.723 12:08:17 -- common/autotest_common.sh@1667 -- # set -o errtrace 00:16:10.723 12:08:17 -- common/autotest_common.sh@1668 -- # shopt -s extdebug 00:16:10.723 12:08:17 -- 
common/autotest_common.sh@1669 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:16:10.723 12:08:17 -- common/autotest_common.sh@1671 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:16:10.723 12:08:17 -- common/autotest_common.sh@1672 -- # true 00:16:10.723 12:08:17 -- common/autotest_common.sh@1674 -- # xtrace_fd 00:16:10.723 12:08:17 -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:16:10.723 12:08:17 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:16:10.723 12:08:17 -- common/autotest_common.sh@27 -- # exec 00:16:10.723 12:08:17 -- common/autotest_common.sh@29 -- # exec 00:16:10.723 12:08:17 -- common/autotest_common.sh@31 -- # xtrace_restore 00:16:10.723 12:08:17 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:16:10.723 12:08:17 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:16:10.723 12:08:17 -- common/autotest_common.sh@18 -- # set -x 00:16:10.723 12:08:17 -- interrupt/interrupt_common.sh@9 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:10.723 12:08:17 -- interrupt/interrupt_common.sh@11 -- # r0_mask=0x1 00:16:10.723 12:08:17 -- interrupt/interrupt_common.sh@12 -- # r1_mask=0x2 00:16:10.723 12:08:17 -- interrupt/interrupt_common.sh@13 -- # r2_mask=0x4 00:16:10.723 12:08:17 -- interrupt/interrupt_common.sh@15 -- # cpu_server_mask=0x07 00:16:10.723 12:08:17 -- interrupt/interrupt_common.sh@16 -- # rpc_server_addr=/var/tmp/spdk.sock 00:16:10.723 12:08:17 -- interrupt/reap_unregistered_poller.sh@14 -- # export PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:16:10.723 12:08:17 -- 
interrupt/reap_unregistered_poller.sh@14 -- # PYTHONPATH=:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/crypto-phy-autotest/spdk/python:/var/jenkins/workspace/crypto-phy-autotest/spdk/examples/interrupt_tgt 00:16:10.723 12:08:17 -- interrupt/reap_unregistered_poller.sh@17 -- # start_intr_tgt 00:16:10.723 12:08:17 -- interrupt/interrupt_common.sh@23 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.723 12:08:17 -- interrupt/interrupt_common.sh@24 -- # local cpu_mask=0x07 00:16:10.723 12:08:17 -- interrupt/interrupt_common.sh@27 -- # intr_tgt_pid=1278976 00:16:10.723 12:08:17 -- interrupt/interrupt_common.sh@28 -- # trap 'killprocess "$intr_tgt_pid"; cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:10.723 12:08:17 -- interrupt/interrupt_common.sh@26 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/interrupt_tgt -m 0x07 -r /var/tmp/spdk.sock -E -g 00:16:10.723 12:08:17 -- interrupt/interrupt_common.sh@29 -- # waitforlisten 1278976 /var/tmp/spdk.sock 00:16:10.723 12:08:17 -- common/autotest_common.sh@819 -- # '[' -z 1278976 ']' 00:16:10.723 12:08:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:10.723 12:08:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:10.723 12:08:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:10.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:10.723 12:08:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:10.723 12:08:17 -- common/autotest_common.sh@10 -- # set +x 00:16:10.723 [2024-07-25 12:08:17.949620] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:16:10.723 [2024-07-25 12:08:17.949676] [ DPDK EAL parameters: interrupt_tgt --no-shconf -c 0x07 --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1278976 ] 00:16:10.982 [2024-07-25 12:08:18.036902] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:10.982 [2024-07-25 12:08:18.121504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:10.982 [2024-07-25 12:08:18.121592] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:10.982 [2024-07-25 12:08:18.121595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.982 [2024-07-25 12:08:18.191964] thread.c:2085:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:16:11.548 12:08:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:11.548 12:08:18 -- common/autotest_common.sh@852 -- # return 0 00:16:11.548 12:08:18 -- interrupt/reap_unregistered_poller.sh@20 -- # rpc_cmd thread_get_pollers 00:16:11.548 12:08:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:11.548 12:08:18 -- common/autotest_common.sh@10 -- # set +x 00:16:11.548 12:08:18 -- interrupt/reap_unregistered_poller.sh@20 -- # jq -r '.threads[0]' 00:16:11.548 12:08:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:11.548 12:08:18 -- interrupt/reap_unregistered_poller.sh@20 -- # app_thread='{ 00:16:11.548 "name": "app_thread", 00:16:11.548 "id": 1, 00:16:11.548 "active_pollers": [], 00:16:11.548 "timed_pollers": [ 00:16:11.548 { 00:16:11.548 "name": "rpc_subsystem_poll", 00:16:11.548 "id": 1, 00:16:11.548 "state": "waiting", 00:16:11.548 "run_count": 0, 00:16:11.548 "busy_count": 0, 00:16:11.548 "period_ticks": 9200000 00:16:11.548 } 00:16:11.548 ], 00:16:11.548 "paused_pollers": [] 00:16:11.548 }' 00:16:11.548 12:08:18 -- interrupt/reap_unregistered_poller.sh@21 -- # jq -r 
'.active_pollers[].name' 00:16:11.548 12:08:18 -- interrupt/reap_unregistered_poller.sh@21 -- # native_pollers= 00:16:11.548 12:08:18 -- interrupt/reap_unregistered_poller.sh@22 -- # native_pollers+=' ' 00:16:11.548 12:08:18 -- interrupt/reap_unregistered_poller.sh@23 -- # jq -r '.timed_pollers[].name' 00:16:11.548 12:08:18 -- interrupt/reap_unregistered_poller.sh@23 -- # native_pollers+=rpc_subsystem_poll 00:16:11.548 12:08:18 -- interrupt/reap_unregistered_poller.sh@28 -- # setup_bdev_aio 00:16:11.548 12:08:18 -- interrupt/interrupt_common.sh@98 -- # uname -s 00:16:11.548 12:08:18 -- interrupt/interrupt_common.sh@98 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:16:11.548 12:08:18 -- interrupt/interrupt_common.sh@99 -- # dd if=/dev/zero of=/var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile bs=2048 count=5000 00:16:11.806 5000+0 records in 00:16:11.806 5000+0 records out 00:16:11.806 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0260977 s, 392 MB/s 00:16:11.806 12:08:18 -- interrupt/interrupt_common.sh@100 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile AIO0 2048 00:16:11.806 AIO0 00:16:11.806 12:08:19 -- interrupt/reap_unregistered_poller.sh@33 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:12.064 12:08:19 -- interrupt/reap_unregistered_poller.sh@34 -- # sleep 0.1 00:16:12.322 12:08:19 -- interrupt/reap_unregistered_poller.sh@37 -- # rpc_cmd thread_get_pollers 00:16:12.322 12:08:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:12.322 12:08:19 -- interrupt/reap_unregistered_poller.sh@37 -- # jq -r '.threads[0]' 00:16:12.322 12:08:19 -- common/autotest_common.sh@10 -- # set +x 00:16:12.322 12:08:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:12.322 12:08:19 -- interrupt/reap_unregistered_poller.sh@37 -- # app_thread='{ 00:16:12.322 "name": "app_thread", 00:16:12.322 "id": 1, 
00:16:12.322 "active_pollers": [], 00:16:12.322 "timed_pollers": [ 00:16:12.322 { 00:16:12.322 "name": "rpc_subsystem_poll", 00:16:12.322 "id": 1, 00:16:12.322 "state": "waiting", 00:16:12.322 "run_count": 0, 00:16:12.322 "busy_count": 0, 00:16:12.322 "period_ticks": 9200000 00:16:12.322 } 00:16:12.322 ], 00:16:12.322 "paused_pollers": [] 00:16:12.322 }' 00:16:12.322 12:08:19 -- interrupt/reap_unregistered_poller.sh@38 -- # jq -r '.active_pollers[].name' 00:16:12.322 12:08:19 -- interrupt/reap_unregistered_poller.sh@38 -- # remaining_pollers= 00:16:12.322 12:08:19 -- interrupt/reap_unregistered_poller.sh@39 -- # remaining_pollers+=' ' 00:16:12.322 12:08:19 -- interrupt/reap_unregistered_poller.sh@40 -- # jq -r '.timed_pollers[].name' 00:16:12.322 12:08:19 -- interrupt/reap_unregistered_poller.sh@40 -- # remaining_pollers+=rpc_subsystem_poll 00:16:12.322 12:08:19 -- interrupt/reap_unregistered_poller.sh@44 -- # [[ rpc_subsystem_poll == \ \r\p\c\_\s\u\b\s\y\s\t\e\m\_\p\o\l\l ]] 00:16:12.322 12:08:19 -- interrupt/reap_unregistered_poller.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:12.322 12:08:19 -- interrupt/reap_unregistered_poller.sh@47 -- # killprocess 1278976 00:16:12.322 12:08:19 -- common/autotest_common.sh@926 -- # '[' -z 1278976 ']' 00:16:12.322 12:08:19 -- common/autotest_common.sh@930 -- # kill -0 1278976 00:16:12.322 12:08:19 -- common/autotest_common.sh@931 -- # uname 00:16:12.322 12:08:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:12.322 12:08:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1278976 00:16:12.322 12:08:19 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:12.322 12:08:19 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:12.322 12:08:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1278976' 00:16:12.322 killing process with pid 1278976 00:16:12.322 12:08:19 -- common/autotest_common.sh@945 -- # kill 1278976 00:16:12.322 12:08:19 -- 
common/autotest_common.sh@950 -- # wait 1278976 00:16:12.579 12:08:19 -- interrupt/reap_unregistered_poller.sh@48 -- # cleanup 00:16:12.579 12:08:19 -- interrupt/interrupt_common.sh@19 -- # rm -f /var/jenkins/workspace/crypto-phy-autotest/spdk/test/interrupt/aiofile 00:16:12.579 00:16:12.579 real 0m2.145s 00:16:12.579 user 0m1.215s 00:16:12.579 sys 0m0.604s 00:16:12.579 12:08:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:12.579 12:08:19 -- common/autotest_common.sh@10 -- # set +x 00:16:12.579 ************************************ 00:16:12.579 END TEST reap_unregistered_poller 00:16:12.579 ************************************ 00:16:12.579 12:08:19 -- spdk/autotest.sh@204 -- # uname -s 00:16:12.579 12:08:19 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:16:12.579 12:08:19 -- spdk/autotest.sh@205 -- # [[ 1 -eq 1 ]] 00:16:12.579 12:08:19 -- spdk/autotest.sh@211 -- # [[ 1 -eq 0 ]] 00:16:12.579 12:08:19 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:16:12.579 12:08:19 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:16:12.579 12:08:19 -- spdk/autotest.sh@268 -- # timing_exit lib 00:16:12.579 12:08:19 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:12.579 12:08:19 -- common/autotest_common.sh@10 -- # set +x 00:16:12.579 12:08:19 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:16:12.579 12:08:19 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:16:12.579 12:08:19 -- spdk/autotest.sh@287 -- # '[' 0 -eq 1 ']' 00:16:12.579 12:08:19 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:16:12.579 12:08:19 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:16:12.579 12:08:19 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:16:12.579 12:08:19 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:16:12.579 12:08:19 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:16:12.579 12:08:19 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:16:12.579 12:08:19 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:16:12.579 12:08:19 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 
00:16:12.579 12:08:19 -- spdk/autotest.sh@350 -- # '[' 1 -eq 1 ']' 00:16:12.579 12:08:19 -- spdk/autotest.sh@351 -- # run_test compress_compdev /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:16:12.579 12:08:19 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:12.579 12:08:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:12.579 12:08:19 -- common/autotest_common.sh@10 -- # set +x 00:16:12.579 ************************************ 00:16:12.579 START TEST compress_compdev 00:16:12.579 ************************************ 00:16:12.579 12:08:19 -- common/autotest_common.sh@1104 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh compdev 00:16:12.837 * Looking for test storage... 00:16:12.837 * Found test storage at /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress 00:16:12.837 12:08:19 -- compress/compress.sh@13 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh 00:16:12.837 12:08:19 -- nvmf/common.sh@7 -- # uname -s 00:16:12.837 12:08:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:12.837 12:08:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:12.837 12:08:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:12.837 12:08:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:12.837 12:08:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:12.837 12:08:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:12.837 12:08:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:12.837 12:08:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:12.837 12:08:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:12.837 12:08:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:12.838 12:08:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00d40ca9-2a78-e711-906e-0017a4403562 00:16:12.838 12:08:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=00d40ca9-2a78-e711-906e-0017a4403562 
00:16:12.838 12:08:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:12.838 12:08:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:12.838 12:08:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:16:12.838 12:08:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/common.sh 00:16:12.838 12:08:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:12.838 12:08:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:12.838 12:08:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:12.838 12:08:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.838 12:08:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.838 12:08:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.838 12:08:19 -- paths/export.sh@5 -- # export PATH 00:16:12.838 12:08:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:12.838 12:08:19 -- nvmf/common.sh@46 -- # : 0 00:16:12.838 12:08:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:12.838 12:08:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:12.838 12:08:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:12.838 12:08:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:12.838 12:08:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:12.838 12:08:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:12.838 12:08:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:12.838 12:08:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:12.838 12:08:19 -- compress/compress.sh@17 -- # rpc_py=/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py 00:16:12.838 12:08:19 -- compress/compress.sh@81 -- # mkdir -p /tmp/pmem 00:16:12.838 12:08:20 -- compress/compress.sh@82 -- # test_type=compdev 00:16:12.838 12:08:20 -- compress/compress.sh@86 -- # 
run_bdevperf 32 4096 3 00:16:12.838 12:08:20 -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:16:12.838 12:08:20 -- compress/compress.sh@71 -- # bdevperf_pid=1279293 00:16:12.838 12:08:20 -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:16:12.838 12:08:20 -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:12.838 12:08:20 -- compress/compress.sh@73 -- # waitforlisten 1279293 00:16:12.838 12:08:20 -- common/autotest_common.sh@819 -- # '[' -z 1279293 ']' 00:16:12.838 12:08:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:12.838 12:08:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:12.838 12:08:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:12.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:12.838 12:08:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:12.838 12:08:20 -- common/autotest_common.sh@10 -- # set +x 00:16:12.838 [2024-07-25 12:08:20.045205] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:16:12.838 [2024-07-25 12:08:20.045283] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1279293 ] 00:16:12.838 [2024-07-25 12:08:20.128888] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:13.097 [2024-07-25 12:08:20.212893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:13.097 [2024-07-25 12:08:20.212896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:13.663 [2024-07-25 12:08:20.753695] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:16:13.663 12:08:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:13.663 12:08:20 -- common/autotest_common.sh@852 -- # return 0 00:16:13.663 12:08:20 -- compress/compress.sh@74 -- # create_vols 00:16:13.663 12:08:20 -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:16:13.663 12:08:20 -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:16:14.229 [2024-07-25 12:08:21.334506] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x13870a0 PMD being used: compress_qat 00:16:14.229 12:08:21 -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:16:14.229 12:08:21 -- common/autotest_common.sh@887 -- # local bdev_name=Nvme0n1 00:16:14.229 12:08:21 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:14.229 12:08:21 -- common/autotest_common.sh@889 -- # local i 00:16:14.229 12:08:21 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:14.229 12:08:21 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:14.229 12:08:21 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:14.229 12:08:21 -- common/autotest_common.sh@894 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:16:14.519 [ 00:16:14.519 { 00:16:14.519 "name": "Nvme0n1", 00:16:14.519 "aliases": [ 00:16:14.519 "01000000-0000-0000-5cd2-e42bec7b5351" 00:16:14.519 ], 00:16:14.519 "product_name": "NVMe disk", 00:16:14.519 "block_size": 512, 00:16:14.519 "num_blocks": 7501476528, 00:16:14.519 "uuid": "01000000-0000-0000-5cd2-e42bec7b5351", 00:16:14.519 "assigned_rate_limits": { 00:16:14.519 "rw_ios_per_sec": 0, 00:16:14.519 "rw_mbytes_per_sec": 0, 00:16:14.519 "r_mbytes_per_sec": 0, 00:16:14.519 "w_mbytes_per_sec": 0 00:16:14.519 }, 00:16:14.519 "claimed": false, 00:16:14.519 "zoned": false, 00:16:14.519 "supported_io_types": { 00:16:14.519 "read": true, 00:16:14.519 "write": true, 00:16:14.519 "unmap": true, 00:16:14.519 "write_zeroes": true, 00:16:14.519 "flush": true, 00:16:14.519 "reset": true, 00:16:14.519 "compare": false, 00:16:14.519 "compare_and_write": false, 00:16:14.519 "abort": true, 00:16:14.519 "nvme_admin": true, 00:16:14.519 "nvme_io": true 00:16:14.519 }, 00:16:14.519 "driver_specific": { 00:16:14.519 "nvme": [ 00:16:14.519 { 00:16:14.519 "pci_address": "0000:5e:00.0", 00:16:14.519 "trid": { 00:16:14.519 "trtype": "PCIe", 00:16:14.519 "traddr": "0000:5e:00.0" 00:16:14.519 }, 00:16:14.519 "ctrlr_data": { 00:16:14.519 "cntlid": 0, 00:16:14.519 "vendor_id": "0x8086", 00:16:14.519 "model_number": "INTEL SSDPF2KX038T1", 00:16:14.519 "serial_number": "PHAX137100D13P8CGN", 00:16:14.519 "firmware_revision": "9CV10015", 00:16:14.519 "subnqn": "nqn.2021-09.com.intel:PHAX137100D13P8CGN ", 00:16:14.519 "oacs": { 00:16:14.519 "security": 0, 00:16:14.519 "format": 1, 00:16:14.519 "firmware": 1, 00:16:14.519 "ns_manage": 1 00:16:14.519 }, 00:16:14.519 "multi_ctrlr": false, 00:16:14.519 "ana_reporting": false 00:16:14.519 }, 00:16:14.519 "vs": { 00:16:14.519 "nvme_version": "1.4" 00:16:14.519 }, 00:16:14.519 "ns_data": { 00:16:14.519 "id": 1, 00:16:14.519 "can_share": false 
00:16:14.519 } 00:16:14.519 } 00:16:14.519 ], 00:16:14.519 "mp_policy": "active_passive" 00:16:14.519 } 00:16:14.519 } 00:16:14.519 ] 00:16:14.519 12:08:21 -- common/autotest_common.sh@895 -- # return 0 00:16:14.519 12:08:21 -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:16:14.777 [2024-07-25 12:08:21.846765] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1387df0 PMD being used: compress_qat 00:16:14.777 f58130e4-2bfd-44d3-987e-54edcc7afd4c 00:16:14.777 12:08:21 -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:16:14.777 669f46dc-42be-4bcd-a51e-9575e1150bbb 00:16:14.777 12:08:22 -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:16:14.777 12:08:22 -- common/autotest_common.sh@887 -- # local bdev_name=lvs0/lv0 00:16:14.777 12:08:22 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:14.777 12:08:22 -- common/autotest_common.sh@889 -- # local i 00:16:14.777 12:08:22 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:14.777 12:08:22 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:14.777 12:08:22 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:15.036 12:08:22 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:16:15.311 [ 00:16:15.311 { 00:16:15.311 "name": "669f46dc-42be-4bcd-a51e-9575e1150bbb", 00:16:15.311 "aliases": [ 00:16:15.311 "lvs0/lv0" 00:16:15.311 ], 00:16:15.311 "product_name": "Logical Volume", 00:16:15.311 "block_size": 512, 00:16:15.311 "num_blocks": 204800, 00:16:15.311 "uuid": "669f46dc-42be-4bcd-a51e-9575e1150bbb", 00:16:15.311 "assigned_rate_limits": { 00:16:15.311 "rw_ios_per_sec": 0, 00:16:15.311 "rw_mbytes_per_sec": 0, 00:16:15.311 
"r_mbytes_per_sec": 0, 00:16:15.311 "w_mbytes_per_sec": 0 00:16:15.311 }, 00:16:15.311 "claimed": false, 00:16:15.311 "zoned": false, 00:16:15.311 "supported_io_types": { 00:16:15.311 "read": true, 00:16:15.311 "write": true, 00:16:15.311 "unmap": true, 00:16:15.311 "write_zeroes": true, 00:16:15.311 "flush": false, 00:16:15.311 "reset": true, 00:16:15.311 "compare": false, 00:16:15.311 "compare_and_write": false, 00:16:15.311 "abort": false, 00:16:15.311 "nvme_admin": false, 00:16:15.311 "nvme_io": false 00:16:15.311 }, 00:16:15.311 "driver_specific": { 00:16:15.311 "lvol": { 00:16:15.311 "lvol_store_uuid": "f58130e4-2bfd-44d3-987e-54edcc7afd4c", 00:16:15.311 "base_bdev": "Nvme0n1", 00:16:15.311 "thin_provision": true, 00:16:15.311 "snapshot": false, 00:16:15.311 "clone": false, 00:16:15.311 "esnap_clone": false 00:16:15.311 } 00:16:15.311 } 00:16:15.311 } 00:16:15.311 ] 00:16:15.311 12:08:22 -- common/autotest_common.sh@895 -- # return 0 00:16:15.311 12:08:22 -- compress/compress.sh@41 -- # '[' -z '' ']' 00:16:15.311 12:08:22 -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:16:15.311 [2024-07-25 12:08:22.557590] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:16:15.311 COMP_lvs0/lv0 00:16:15.311 12:08:22 -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:16:15.311 12:08:22 -- common/autotest_common.sh@887 -- # local bdev_name=COMP_lvs0/lv0 00:16:15.311 12:08:22 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:15.311 12:08:22 -- common/autotest_common.sh@889 -- # local i 00:16:15.311 12:08:22 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:15.311 12:08:22 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:15.311 12:08:22 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:15.582 12:08:22 
-- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:16:15.841 [ 00:16:15.841 { 00:16:15.841 "name": "COMP_lvs0/lv0", 00:16:15.841 "aliases": [ 00:16:15.841 "43270fcc-f757-5541-9c53-bbfcad8b4cd9" 00:16:15.841 ], 00:16:15.841 "product_name": "compress", 00:16:15.841 "block_size": 512, 00:16:15.841 "num_blocks": 200704, 00:16:15.841 "uuid": "43270fcc-f757-5541-9c53-bbfcad8b4cd9", 00:16:15.841 "assigned_rate_limits": { 00:16:15.841 "rw_ios_per_sec": 0, 00:16:15.841 "rw_mbytes_per_sec": 0, 00:16:15.841 "r_mbytes_per_sec": 0, 00:16:15.841 "w_mbytes_per_sec": 0 00:16:15.841 }, 00:16:15.841 "claimed": false, 00:16:15.841 "zoned": false, 00:16:15.841 "supported_io_types": { 00:16:15.841 "read": true, 00:16:15.841 "write": true, 00:16:15.841 "unmap": false, 00:16:15.841 "write_zeroes": true, 00:16:15.841 "flush": false, 00:16:15.841 "reset": false, 00:16:15.841 "compare": false, 00:16:15.841 "compare_and_write": false, 00:16:15.841 "abort": false, 00:16:15.841 "nvme_admin": false, 00:16:15.841 "nvme_io": false 00:16:15.841 }, 00:16:15.841 "driver_specific": { 00:16:15.841 "compress": { 00:16:15.841 "name": "COMP_lvs0/lv0", 00:16:15.841 "base_bdev_name": "669f46dc-42be-4bcd-a51e-9575e1150bbb" 00:16:15.841 } 00:16:15.841 } 00:16:15.841 } 00:16:15.841 ] 00:16:15.841 12:08:22 -- common/autotest_common.sh@895 -- # return 0 00:16:15.841 12:08:22 -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:15.841 [2024-07-25 12:08:23.027808] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f90ec1ad740 PMD being used: compress_qat 00:16:15.841 [2024-07-25 12:08:23.029550] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1384660 PMD being used: compress_qat 00:16:15.841 Running I/O for 3 seconds... 
00:16:19.123 00:16:19.123 Latency(us) 00:16:19.123 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.123 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:16:19.123 Verification LBA range: start 0x0 length 0x3100 00:16:19.123 COMP_lvs0/lv0 : 3.00 8498.86 33.20 0.00 0.00 3746.14 54.09 6325.65 00:16:19.123 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:16:19.123 Verification LBA range: start 0x3100 length 0x3100 00:16:19.123 COMP_lvs0/lv0 : 3.00 9125.50 35.65 0.00 0.00 3489.72 70.79 6040.71 00:16:19.123 =================================================================================================================== 00:16:19.123 Total : 17624.36 68.85 0.00 0.00 3613.37 54.09 6325.65 00:16:19.123 0 00:16:19.123 12:08:26 -- compress/compress.sh@76 -- # destroy_vols 00:16:19.123 12:08:26 -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:16:19.123 12:08:26 -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:16:19.123 12:08:26 -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:16:19.123 12:08:26 -- compress/compress.sh@78 -- # killprocess 1279293 00:16:19.123 12:08:26 -- common/autotest_common.sh@926 -- # '[' -z 1279293 ']' 00:16:19.123 12:08:26 -- common/autotest_common.sh@930 -- # kill -0 1279293 00:16:19.123 12:08:26 -- common/autotest_common.sh@931 -- # uname 00:16:19.123 12:08:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:19.123 12:08:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1279293 00:16:19.381 12:08:26 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:19.381 12:08:26 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:19.381 12:08:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1279293' 
00:16:19.381 killing process with pid 1279293 00:16:19.381 12:08:26 -- common/autotest_common.sh@945 -- # kill 1279293 00:16:19.381 Received shutdown signal, test time was about 3.000000 seconds 00:16:19.381 00:16:19.381 Latency(us) 00:16:19.381 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:19.381 =================================================================================================================== 00:16:19.381 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:19.381 12:08:26 -- common/autotest_common.sh@950 -- # wait 1279293 00:16:21.281 12:08:28 -- compress/compress.sh@87 -- # run_bdevperf 32 4096 3 512 00:16:21.281 12:08:28 -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:16:21.281 12:08:28 -- compress/compress.sh@71 -- # bdevperf_pid=1280400 00:16:21.281 12:08:28 -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:21.281 12:08:28 -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:16:21.281 12:08:28 -- compress/compress.sh@73 -- # waitforlisten 1280400 00:16:21.281 12:08:28 -- common/autotest_common.sh@819 -- # '[' -z 1280400 ']' 00:16:21.281 12:08:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:21.281 12:08:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:21.281 12:08:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:21.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:21.281 12:08:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:21.281 12:08:28 -- common/autotest_common.sh@10 -- # set +x 00:16:21.281 [2024-07-25 12:08:28.137436] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:21.281 [2024-07-25 12:08:28.137485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1280400 ] 00:16:21.281 [2024-07-25 12:08:28.224397] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:21.281 [2024-07-25 12:08:28.312724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:21.281 [2024-07-25 12:08:28.312727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:21.848 [2024-07-25 12:08:28.863200] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:16:21.848 12:08:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:21.848 12:08:28 -- common/autotest_common.sh@852 -- # return 0 00:16:21.848 12:08:28 -- compress/compress.sh@74 -- # create_vols 512 00:16:21.848 12:08:28 -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:16:21.848 12:08:28 -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:16:22.414 [2024-07-25 12:08:29.422244] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x25830a0 PMD being used: compress_qat 00:16:22.414 12:08:29 -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:16:22.414 12:08:29 -- common/autotest_common.sh@887 -- # local bdev_name=Nvme0n1 00:16:22.414 12:08:29 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:22.414 12:08:29 -- common/autotest_common.sh@889 -- # local i 00:16:22.414 12:08:29 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:22.414 12:08:29 -- 
common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:22.414 12:08:29 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:22.414 12:08:29 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:16:22.673 [ 00:16:22.673 { 00:16:22.673 "name": "Nvme0n1", 00:16:22.673 "aliases": [ 00:16:22.673 "01000000-0000-0000-5cd2-e42bec7b5351" 00:16:22.673 ], 00:16:22.673 "product_name": "NVMe disk", 00:16:22.673 "block_size": 512, 00:16:22.673 "num_blocks": 7501476528, 00:16:22.673 "uuid": "01000000-0000-0000-5cd2-e42bec7b5351", 00:16:22.673 "assigned_rate_limits": { 00:16:22.673 "rw_ios_per_sec": 0, 00:16:22.673 "rw_mbytes_per_sec": 0, 00:16:22.673 "r_mbytes_per_sec": 0, 00:16:22.673 "w_mbytes_per_sec": 0 00:16:22.673 }, 00:16:22.673 "claimed": false, 00:16:22.673 "zoned": false, 00:16:22.673 "supported_io_types": { 00:16:22.673 "read": true, 00:16:22.673 "write": true, 00:16:22.673 "unmap": true, 00:16:22.673 "write_zeroes": true, 00:16:22.673 "flush": true, 00:16:22.673 "reset": true, 00:16:22.673 "compare": false, 00:16:22.673 "compare_and_write": false, 00:16:22.673 "abort": true, 00:16:22.673 "nvme_admin": true, 00:16:22.673 "nvme_io": true 00:16:22.673 }, 00:16:22.673 "driver_specific": { 00:16:22.673 "nvme": [ 00:16:22.673 { 00:16:22.673 "pci_address": "0000:5e:00.0", 00:16:22.673 "trid": { 00:16:22.673 "trtype": "PCIe", 00:16:22.673 "traddr": "0000:5e:00.0" 00:16:22.673 }, 00:16:22.673 "ctrlr_data": { 00:16:22.673 "cntlid": 0, 00:16:22.673 "vendor_id": "0x8086", 00:16:22.673 "model_number": "INTEL SSDPF2KX038T1", 00:16:22.673 "serial_number": "PHAX137100D13P8CGN", 00:16:22.673 "firmware_revision": "9CV10015", 00:16:22.673 "subnqn": "nqn.2021-09.com.intel:PHAX137100D13P8CGN ", 00:16:22.673 "oacs": { 00:16:22.673 "security": 0, 00:16:22.673 "format": 1, 00:16:22.673 "firmware": 1, 00:16:22.673 "ns_manage": 1 
00:16:22.673 }, 00:16:22.673 "multi_ctrlr": false, 00:16:22.673 "ana_reporting": false 00:16:22.673 }, 00:16:22.673 "vs": { 00:16:22.673 "nvme_version": "1.4" 00:16:22.673 }, 00:16:22.673 "ns_data": { 00:16:22.673 "id": 1, 00:16:22.673 "can_share": false 00:16:22.673 } 00:16:22.673 } 00:16:22.673 ], 00:16:22.673 "mp_policy": "active_passive" 00:16:22.673 } 00:16:22.673 } 00:16:22.673 ] 00:16:22.673 12:08:29 -- common/autotest_common.sh@895 -- # return 0 00:16:22.673 12:08:29 -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:16:22.673 [2024-07-25 12:08:29.938469] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x25839e0 PMD being used: compress_qat 00:16:22.673 f25cd0ed-8668-476f-a3ec-a486c73ad967 00:16:22.673 12:08:29 -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:16:22.932 00168b2a-2a76-4bbb-85be-1041a6abdff3 00:16:22.932 12:08:30 -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:16:22.932 12:08:30 -- common/autotest_common.sh@887 -- # local bdev_name=lvs0/lv0 00:16:22.932 12:08:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:22.932 12:08:30 -- common/autotest_common.sh@889 -- # local i 00:16:22.932 12:08:30 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:22.932 12:08:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:22.932 12:08:30 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:23.191 12:08:30 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:16:23.191 [ 00:16:23.191 { 00:16:23.191 "name": "00168b2a-2a76-4bbb-85be-1041a6abdff3", 00:16:23.191 "aliases": [ 00:16:23.191 "lvs0/lv0" 00:16:23.191 ], 00:16:23.191 "product_name": "Logical Volume", 
00:16:23.191 "block_size": 512, 00:16:23.191 "num_blocks": 204800, 00:16:23.191 "uuid": "00168b2a-2a76-4bbb-85be-1041a6abdff3", 00:16:23.191 "assigned_rate_limits": { 00:16:23.191 "rw_ios_per_sec": 0, 00:16:23.191 "rw_mbytes_per_sec": 0, 00:16:23.191 "r_mbytes_per_sec": 0, 00:16:23.191 "w_mbytes_per_sec": 0 00:16:23.191 }, 00:16:23.191 "claimed": false, 00:16:23.191 "zoned": false, 00:16:23.191 "supported_io_types": { 00:16:23.191 "read": true, 00:16:23.191 "write": true, 00:16:23.191 "unmap": true, 00:16:23.191 "write_zeroes": true, 00:16:23.191 "flush": false, 00:16:23.191 "reset": true, 00:16:23.191 "compare": false, 00:16:23.191 "compare_and_write": false, 00:16:23.191 "abort": false, 00:16:23.191 "nvme_admin": false, 00:16:23.191 "nvme_io": false 00:16:23.191 }, 00:16:23.191 "driver_specific": { 00:16:23.191 "lvol": { 00:16:23.191 "lvol_store_uuid": "f25cd0ed-8668-476f-a3ec-a486c73ad967", 00:16:23.191 "base_bdev": "Nvme0n1", 00:16:23.191 "thin_provision": true, 00:16:23.191 "snapshot": false, 00:16:23.191 "clone": false, 00:16:23.191 "esnap_clone": false 00:16:23.191 } 00:16:23.191 } 00:16:23.191 } 00:16:23.191 ] 00:16:23.448 12:08:30 -- common/autotest_common.sh@895 -- # return 0 00:16:23.448 12:08:30 -- compress/compress.sh@41 -- # '[' -z 512 ']' 00:16:23.448 12:08:30 -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 512 00:16:23.448 [2024-07-25 12:08:30.653316] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:16:23.448 COMP_lvs0/lv0 00:16:23.448 12:08:30 -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:16:23.448 12:08:30 -- common/autotest_common.sh@887 -- # local bdev_name=COMP_lvs0/lv0 00:16:23.448 12:08:30 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:23.448 12:08:30 -- common/autotest_common.sh@889 -- # local i 00:16:23.448 12:08:30 -- common/autotest_common.sh@890 -- 
# [[ -z '' ]] 00:16:23.448 12:08:30 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:23.448 12:08:30 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:23.706 12:08:30 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:16:23.706 [ 00:16:23.706 { 00:16:23.706 "name": "COMP_lvs0/lv0", 00:16:23.706 "aliases": [ 00:16:23.706 "6967134e-1022-5fc7-8f5f-1f48e227790b" 00:16:23.706 ], 00:16:23.706 "product_name": "compress", 00:16:23.706 "block_size": 512, 00:16:23.706 "num_blocks": 200704, 00:16:23.706 "uuid": "6967134e-1022-5fc7-8f5f-1f48e227790b", 00:16:23.706 "assigned_rate_limits": { 00:16:23.706 "rw_ios_per_sec": 0, 00:16:23.706 "rw_mbytes_per_sec": 0, 00:16:23.706 "r_mbytes_per_sec": 0, 00:16:23.706 "w_mbytes_per_sec": 0 00:16:23.706 }, 00:16:23.706 "claimed": false, 00:16:23.706 "zoned": false, 00:16:23.706 "supported_io_types": { 00:16:23.706 "read": true, 00:16:23.706 "write": true, 00:16:23.706 "unmap": false, 00:16:23.706 "write_zeroes": true, 00:16:23.706 "flush": false, 00:16:23.706 "reset": false, 00:16:23.706 "compare": false, 00:16:23.706 "compare_and_write": false, 00:16:23.706 "abort": false, 00:16:23.706 "nvme_admin": false, 00:16:23.706 "nvme_io": false 00:16:23.706 }, 00:16:23.706 "driver_specific": { 00:16:23.706 "compress": { 00:16:23.706 "name": "COMP_lvs0/lv0", 00:16:23.706 "base_bdev_name": "00168b2a-2a76-4bbb-85be-1041a6abdff3" 00:16:23.706 } 00:16:23.706 } 00:16:23.706 } 00:16:23.706 ] 00:16:23.965 12:08:31 -- common/autotest_common.sh@895 -- # return 0 00:16:23.965 12:08:31 -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:23.965 [2024-07-25 12:08:31.099452] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f826c1ad740 PMD being used: compress_qat 00:16:23.965 
[2024-07-25 12:08:31.101200] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x2580810 PMD being used: compress_qat 00:16:23.965 Running I/O for 3 seconds... 00:16:27.247 00:16:27.247 Latency(us) 00:16:27.247 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.247 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:16:27.247 Verification LBA range: start 0x0 length 0x3100 00:16:27.247 COMP_lvs0/lv0 : 3.00 8488.50 33.16 0.00 0.00 3749.57 52.31 6582.09 00:16:27.247 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:16:27.247 Verification LBA range: start 0x3100 length 0x3100 00:16:27.247 COMP_lvs0/lv0 : 3.00 9111.34 35.59 0.00 0.00 3495.28 38.07 5983.72 00:16:27.247 =================================================================================================================== 00:16:27.247 Total : 17599.84 68.75 0.00 0.00 3617.94 38.07 6582.09 00:16:27.247 0 00:16:27.247 12:08:34 -- compress/compress.sh@76 -- # destroy_vols 00:16:27.247 12:08:34 -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:16:27.247 12:08:34 -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:16:27.247 12:08:34 -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:16:27.247 12:08:34 -- compress/compress.sh@78 -- # killprocess 1280400 00:16:27.247 12:08:34 -- common/autotest_common.sh@926 -- # '[' -z 1280400 ']' 00:16:27.247 12:08:34 -- common/autotest_common.sh@930 -- # kill -0 1280400 00:16:27.247 12:08:34 -- common/autotest_common.sh@931 -- # uname 00:16:27.247 12:08:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:27.247 12:08:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1280400 00:16:27.248 12:08:34 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:27.248 12:08:34 -- 
common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:27.248 12:08:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1280400' 00:16:27.248 killing process with pid 1280400 00:16:27.248 12:08:34 -- common/autotest_common.sh@945 -- # kill 1280400 00:16:27.248 Received shutdown signal, test time was about 3.000000 seconds 00:16:27.248 00:16:27.248 Latency(us) 00:16:27.248 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.248 =================================================================================================================== 00:16:27.248 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:27.248 12:08:34 -- common/autotest_common.sh@950 -- # wait 1280400 00:16:29.148 12:08:36 -- compress/compress.sh@88 -- # run_bdevperf 32 4096 3 4096 00:16:29.148 12:08:36 -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:16:29.148 12:08:36 -- compress/compress.sh@71 -- # bdevperf_pid=1281508 00:16:29.148 12:08:36 -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:29.148 12:08:36 -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 32 -o 4096 -w verify -t 3 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:16:29.148 12:08:36 -- compress/compress.sh@73 -- # waitforlisten 1281508 00:16:29.148 12:08:36 -- common/autotest_common.sh@819 -- # '[' -z 1281508 ']' 00:16:29.148 12:08:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.148 12:08:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:29.148 12:08:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:29.148 12:08:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:29.148 12:08:36 -- common/autotest_common.sh@10 -- # set +x 00:16:29.148 [2024-07-25 12:08:36.242148] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:29.148 [2024-07-25 12:08:36.242210] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1281508 ] 00:16:29.148 [2024-07-25 12:08:36.329414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:29.148 [2024-07-25 12:08:36.412428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:29.148 [2024-07-25 12:08:36.412433] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.714 [2024-07-25 12:08:36.961181] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:16:29.972 12:08:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:29.972 12:08:37 -- common/autotest_common.sh@852 -- # return 0 00:16:29.972 12:08:37 -- compress/compress.sh@74 -- # create_vols 4096 00:16:29.972 12:08:37 -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:16:29.972 12:08:37 -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:16:30.229 [2024-07-25 12:08:37.534436] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xc030a0 PMD being used: compress_qat 00:16:30.488 12:08:37 -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:16:30.488 12:08:37 -- common/autotest_common.sh@887 -- # local bdev_name=Nvme0n1 00:16:30.488 12:08:37 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:30.488 12:08:37 -- common/autotest_common.sh@889 -- # local i 00:16:30.488 12:08:37 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:30.488 12:08:37 -- 
common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:30.488 12:08:37 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:30.488 12:08:37 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:16:30.746 [ 00:16:30.746 { 00:16:30.746 "name": "Nvme0n1", 00:16:30.746 "aliases": [ 00:16:30.746 "01000000-0000-0000-5cd2-e42bec7b5351" 00:16:30.746 ], 00:16:30.746 "product_name": "NVMe disk", 00:16:30.746 "block_size": 512, 00:16:30.746 "num_blocks": 7501476528, 00:16:30.746 "uuid": "01000000-0000-0000-5cd2-e42bec7b5351", 00:16:30.746 "assigned_rate_limits": { 00:16:30.746 "rw_ios_per_sec": 0, 00:16:30.746 "rw_mbytes_per_sec": 0, 00:16:30.746 "r_mbytes_per_sec": 0, 00:16:30.746 "w_mbytes_per_sec": 0 00:16:30.746 }, 00:16:30.746 "claimed": false, 00:16:30.746 "zoned": false, 00:16:30.746 "supported_io_types": { 00:16:30.746 "read": true, 00:16:30.746 "write": true, 00:16:30.746 "unmap": true, 00:16:30.746 "write_zeroes": true, 00:16:30.746 "flush": true, 00:16:30.746 "reset": true, 00:16:30.746 "compare": false, 00:16:30.746 "compare_and_write": false, 00:16:30.746 "abort": true, 00:16:30.746 "nvme_admin": true, 00:16:30.746 "nvme_io": true 00:16:30.746 }, 00:16:30.746 "driver_specific": { 00:16:30.746 "nvme": [ 00:16:30.746 { 00:16:30.746 "pci_address": "0000:5e:00.0", 00:16:30.746 "trid": { 00:16:30.746 "trtype": "PCIe", 00:16:30.746 "traddr": "0000:5e:00.0" 00:16:30.746 }, 00:16:30.746 "ctrlr_data": { 00:16:30.746 "cntlid": 0, 00:16:30.746 "vendor_id": "0x8086", 00:16:30.746 "model_number": "INTEL SSDPF2KX038T1", 00:16:30.746 "serial_number": "PHAX137100D13P8CGN", 00:16:30.746 "firmware_revision": "9CV10015", 00:16:30.746 "subnqn": "nqn.2021-09.com.intel:PHAX137100D13P8CGN ", 00:16:30.746 "oacs": { 00:16:30.746 "security": 0, 00:16:30.746 "format": 1, 00:16:30.746 "firmware": 1, 00:16:30.746 "ns_manage": 1 
00:16:30.746 }, 00:16:30.746 "multi_ctrlr": false, 00:16:30.746 "ana_reporting": false 00:16:30.746 }, 00:16:30.746 "vs": { 00:16:30.746 "nvme_version": "1.4" 00:16:30.746 }, 00:16:30.746 "ns_data": { 00:16:30.746 "id": 1, 00:16:30.746 "can_share": false 00:16:30.746 } 00:16:30.746 } 00:16:30.746 ], 00:16:30.746 "mp_policy": "active_passive" 00:16:30.746 } 00:16:30.746 } 00:16:30.746 ] 00:16:30.746 12:08:37 -- common/autotest_common.sh@895 -- # return 0 00:16:30.746 12:08:37 -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:16:30.746 [2024-07-25 12:08:38.054656] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xc039e0 PMD being used: compress_qat 00:16:31.004 819a9070-57f7-4fad-af67-02ad1e6c28c8 00:16:31.004 12:08:38 -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:16:31.004 2ed35c5b-a5e7-4b9d-a02f-2d572b5dd694 00:16:31.004 12:08:38 -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:16:31.004 12:08:38 -- common/autotest_common.sh@887 -- # local bdev_name=lvs0/lv0 00:16:31.004 12:08:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:31.004 12:08:38 -- common/autotest_common.sh@889 -- # local i 00:16:31.004 12:08:38 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:31.004 12:08:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:31.004 12:08:38 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:31.262 12:08:38 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:16:31.521 [ 00:16:31.521 { 00:16:31.521 "name": "2ed35c5b-a5e7-4b9d-a02f-2d572b5dd694", 00:16:31.521 "aliases": [ 00:16:31.521 "lvs0/lv0" 00:16:31.521 ], 00:16:31.521 "product_name": "Logical Volume", 
00:16:31.521 "block_size": 512, 00:16:31.521 "num_blocks": 204800, 00:16:31.521 "uuid": "2ed35c5b-a5e7-4b9d-a02f-2d572b5dd694", 00:16:31.521 "assigned_rate_limits": { 00:16:31.521 "rw_ios_per_sec": 0, 00:16:31.521 "rw_mbytes_per_sec": 0, 00:16:31.521 "r_mbytes_per_sec": 0, 00:16:31.521 "w_mbytes_per_sec": 0 00:16:31.521 }, 00:16:31.521 "claimed": false, 00:16:31.521 "zoned": false, 00:16:31.521 "supported_io_types": { 00:16:31.521 "read": true, 00:16:31.521 "write": true, 00:16:31.521 "unmap": true, 00:16:31.521 "write_zeroes": true, 00:16:31.521 "flush": false, 00:16:31.521 "reset": true, 00:16:31.521 "compare": false, 00:16:31.521 "compare_and_write": false, 00:16:31.521 "abort": false, 00:16:31.521 "nvme_admin": false, 00:16:31.521 "nvme_io": false 00:16:31.521 }, 00:16:31.521 "driver_specific": { 00:16:31.521 "lvol": { 00:16:31.521 "lvol_store_uuid": "819a9070-57f7-4fad-af67-02ad1e6c28c8", 00:16:31.521 "base_bdev": "Nvme0n1", 00:16:31.521 "thin_provision": true, 00:16:31.521 "snapshot": false, 00:16:31.521 "clone": false, 00:16:31.521 "esnap_clone": false 00:16:31.521 } 00:16:31.521 } 00:16:31.521 } 00:16:31.521 ] 00:16:31.521 12:08:38 -- common/autotest_common.sh@895 -- # return 0 00:16:31.521 12:08:38 -- compress/compress.sh@41 -- # '[' -z 4096 ']' 00:16:31.521 12:08:38 -- compress/compress.sh@44 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l 4096 00:16:31.521 [2024-07-25 12:08:38.737548] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:16:31.521 COMP_lvs0/lv0 00:16:31.521 12:08:38 -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:16:31.521 12:08:38 -- common/autotest_common.sh@887 -- # local bdev_name=COMP_lvs0/lv0 00:16:31.521 12:08:38 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:31.521 12:08:38 -- common/autotest_common.sh@889 -- # local i 00:16:31.521 12:08:38 -- common/autotest_common.sh@890 
-- # [[ -z '' ]] 00:16:31.521 12:08:38 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:31.521 12:08:38 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:31.831 12:08:38 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:16:31.831 [ 00:16:31.831 { 00:16:31.831 "name": "COMP_lvs0/lv0", 00:16:31.831 "aliases": [ 00:16:31.831 "440e2b22-abc9-51df-97c9-d2959ef3fc72" 00:16:31.831 ], 00:16:31.831 "product_name": "compress", 00:16:31.831 "block_size": 4096, 00:16:31.831 "num_blocks": 25088, 00:16:31.831 "uuid": "440e2b22-abc9-51df-97c9-d2959ef3fc72", 00:16:31.831 "assigned_rate_limits": { 00:16:31.831 "rw_ios_per_sec": 0, 00:16:31.831 "rw_mbytes_per_sec": 0, 00:16:31.831 "r_mbytes_per_sec": 0, 00:16:31.831 "w_mbytes_per_sec": 0 00:16:31.831 }, 00:16:31.831 "claimed": false, 00:16:31.831 "zoned": false, 00:16:31.831 "supported_io_types": { 00:16:31.831 "read": true, 00:16:31.831 "write": true, 00:16:31.831 "unmap": false, 00:16:31.831 "write_zeroes": true, 00:16:31.831 "flush": false, 00:16:31.831 "reset": false, 00:16:31.831 "compare": false, 00:16:31.831 "compare_and_write": false, 00:16:31.831 "abort": false, 00:16:31.831 "nvme_admin": false, 00:16:31.831 "nvme_io": false 00:16:31.831 }, 00:16:31.831 "driver_specific": { 00:16:31.831 "compress": { 00:16:31.831 "name": "COMP_lvs0/lv0", 00:16:31.831 "base_bdev_name": "2ed35c5b-a5e7-4b9d-a02f-2d572b5dd694" 00:16:31.831 } 00:16:31.831 } 00:16:31.831 } 00:16:31.831 ] 00:16:31.832 12:08:39 -- common/autotest_common.sh@895 -- # return 0 00:16:31.832 12:08:39 -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:32.090 [2024-07-25 12:08:39.167563] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7fedbc1ad740 PMD being used: compress_qat 00:16:32.090 
[2024-07-25 12:08:39.169276] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0xc00810 PMD being used: compress_qat 00:16:32.090 Running I/O for 3 seconds... 00:16:35.372 00:16:35.372 Latency(us) 00:16:35.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.372 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 4096) 00:16:35.372 Verification LBA range: start 0x0 length 0x3100 00:16:35.372 COMP_lvs0/lv0 : 3.00 8482.20 33.13 0.00 0.00 3753.54 37.84 6952.51 00:16:35.372 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 32, IO size: 4096) 00:16:35.372 Verification LBA range: start 0x3100 length 0x3100 00:16:35.372 COMP_lvs0/lv0 : 3.00 9074.14 35.45 0.00 0.00 3509.49 71.23 6753.06 00:16:35.372 =================================================================================================================== 00:16:35.372 Total : 17556.34 68.58 0.00 0.00 3627.42 37.84 6952.51 00:16:35.372 0 00:16:35.372 12:08:42 -- compress/compress.sh@76 -- # destroy_vols 00:16:35.372 12:08:42 -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:16:35.372 12:08:42 -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:16:35.372 12:08:42 -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:16:35.372 12:08:42 -- compress/compress.sh@78 -- # killprocess 1281508 00:16:35.372 12:08:42 -- common/autotest_common.sh@926 -- # '[' -z 1281508 ']' 00:16:35.372 12:08:42 -- common/autotest_common.sh@930 -- # kill -0 1281508 00:16:35.372 12:08:42 -- common/autotest_common.sh@931 -- # uname 00:16:35.372 12:08:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:35.372 12:08:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1281508 00:16:35.372 12:08:42 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:35.372 12:08:42 -- 
common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:35.372 12:08:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1281508' 00:16:35.372 killing process with pid 1281508 00:16:35.372 12:08:42 -- common/autotest_common.sh@945 -- # kill 1281508 00:16:35.372 Received shutdown signal, test time was about 3.000000 seconds 00:16:35.372 00:16:35.372 Latency(us) 00:16:35.372 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:35.372 =================================================================================================================== 00:16:35.372 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:35.372 12:08:42 -- common/autotest_common.sh@950 -- # wait 1281508 00:16:37.273 12:08:44 -- compress/compress.sh@89 -- # run_bdevio 00:16:37.273 12:08:44 -- compress/compress.sh@50 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:16:37.273 12:08:44 -- compress/compress.sh@55 -- # bdevio_pid=1282616 00:16:37.273 12:08:44 -- compress/compress.sh@56 -- # trap 'killprocess $bdevio_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:37.273 12:08:44 -- compress/compress.sh@51 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/bdevio -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json -w 00:16:37.273 12:08:44 -- compress/compress.sh@57 -- # waitforlisten 1282616 00:16:37.273 12:08:44 -- common/autotest_common.sh@819 -- # '[' -z 1282616 ']' 00:16:37.273 12:08:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:37.273 12:08:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:37.273 12:08:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:37.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:37.273 12:08:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:37.273 12:08:44 -- common/autotest_common.sh@10 -- # set +x 00:16:37.273 [2024-07-25 12:08:44.327463] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 00:16:37.273 [2024-07-25 12:08:44.327524] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1282616 ] 00:16:37.273 [2024-07-25 12:08:44.415100] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:37.273 [2024-07-25 12:08:44.500211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:37.273 [2024-07-25 12:08:44.500302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:37.273 [2024-07-25 12:08:44.500306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.862 [2024-07-25 12:08:45.033808] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:16:37.862 12:08:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:37.862 12:08:45 -- common/autotest_common.sh@852 -- # return 0 00:16:37.862 12:08:45 -- compress/compress.sh@58 -- # create_vols 00:16:37.862 12:08:45 -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:16:37.862 12:08:45 -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:16:38.439 [2024-07-25 12:08:45.616583] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x25c4b60 PMD being used: compress_qat 00:16:38.439 12:08:45 -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:16:38.439 12:08:45 -- common/autotest_common.sh@887 -- # local bdev_name=Nvme0n1 00:16:38.439 12:08:45 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:38.439 12:08:45 -- common/autotest_common.sh@889 -- # local 
i 00:16:38.439 12:08:45 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:38.439 12:08:45 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:38.439 12:08:45 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:38.698 12:08:45 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:16:38.698 [ 00:16:38.698 { 00:16:38.698 "name": "Nvme0n1", 00:16:38.698 "aliases": [ 00:16:38.698 "01000000-0000-0000-5cd2-e42bec7b5351" 00:16:38.698 ], 00:16:38.698 "product_name": "NVMe disk", 00:16:38.698 "block_size": 512, 00:16:38.698 "num_blocks": 7501476528, 00:16:38.698 "uuid": "01000000-0000-0000-5cd2-e42bec7b5351", 00:16:38.698 "assigned_rate_limits": { 00:16:38.698 "rw_ios_per_sec": 0, 00:16:38.698 "rw_mbytes_per_sec": 0, 00:16:38.698 "r_mbytes_per_sec": 0, 00:16:38.698 "w_mbytes_per_sec": 0 00:16:38.698 }, 00:16:38.698 "claimed": false, 00:16:38.698 "zoned": false, 00:16:38.698 "supported_io_types": { 00:16:38.698 "read": true, 00:16:38.698 "write": true, 00:16:38.698 "unmap": true, 00:16:38.698 "write_zeroes": true, 00:16:38.698 "flush": true, 00:16:38.698 "reset": true, 00:16:38.698 "compare": false, 00:16:38.698 "compare_and_write": false, 00:16:38.698 "abort": true, 00:16:38.698 "nvme_admin": true, 00:16:38.698 "nvme_io": true 00:16:38.698 }, 00:16:38.698 "driver_specific": { 00:16:38.698 "nvme": [ 00:16:38.698 { 00:16:38.698 "pci_address": "0000:5e:00.0", 00:16:38.698 "trid": { 00:16:38.698 "trtype": "PCIe", 00:16:38.698 "traddr": "0000:5e:00.0" 00:16:38.698 }, 00:16:38.698 "ctrlr_data": { 00:16:38.698 "cntlid": 0, 00:16:38.698 "vendor_id": "0x8086", 00:16:38.698 "model_number": "INTEL SSDPF2KX038T1", 00:16:38.698 "serial_number": "PHAX137100D13P8CGN", 00:16:38.698 "firmware_revision": "9CV10015", 00:16:38.698 "subnqn": "nqn.2021-09.com.intel:PHAX137100D13P8CGN ", 00:16:38.698 "oacs": { 00:16:38.698 
"security": 0, 00:16:38.698 "format": 1, 00:16:38.698 "firmware": 1, 00:16:38.698 "ns_manage": 1 00:16:38.698 }, 00:16:38.698 "multi_ctrlr": false, 00:16:38.698 "ana_reporting": false 00:16:38.698 }, 00:16:38.698 "vs": { 00:16:38.698 "nvme_version": "1.4" 00:16:38.698 }, 00:16:38.698 "ns_data": { 00:16:38.698 "id": 1, 00:16:38.698 "can_share": false 00:16:38.698 } 00:16:38.698 } 00:16:38.698 ], 00:16:38.698 "mp_policy": "active_passive" 00:16:38.698 } 00:16:38.698 } 00:16:38.698 ] 00:16:38.698 12:08:45 -- common/autotest_common.sh@895 -- # return 0 00:16:38.698 12:08:45 -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:16:38.955 [2024-07-25 12:08:46.136824] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x24184e0 PMD being used: compress_qat 00:16:38.955 44a2d78f-351e-476b-971c-dc1c6ec8a489 00:16:38.955 12:08:46 -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:16:39.213 7d9081d3-1a18-44c8-9a2d-6fcda4aed334 00:16:39.213 12:08:46 -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:16:39.213 12:08:46 -- common/autotest_common.sh@887 -- # local bdev_name=lvs0/lv0 00:16:39.213 12:08:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:39.213 12:08:46 -- common/autotest_common.sh@889 -- # local i 00:16:39.213 12:08:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:39.213 12:08:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:39.213 12:08:46 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:39.213 12:08:46 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:16:39.471 [ 00:16:39.471 { 00:16:39.471 "name": "7d9081d3-1a18-44c8-9a2d-6fcda4aed334", 00:16:39.471 "aliases": [ 
00:16:39.471 "lvs0/lv0" 00:16:39.471 ], 00:16:39.471 "product_name": "Logical Volume", 00:16:39.471 "block_size": 512, 00:16:39.471 "num_blocks": 204800, 00:16:39.471 "uuid": "7d9081d3-1a18-44c8-9a2d-6fcda4aed334", 00:16:39.471 "assigned_rate_limits": { 00:16:39.471 "rw_ios_per_sec": 0, 00:16:39.471 "rw_mbytes_per_sec": 0, 00:16:39.471 "r_mbytes_per_sec": 0, 00:16:39.471 "w_mbytes_per_sec": 0 00:16:39.471 }, 00:16:39.471 "claimed": false, 00:16:39.471 "zoned": false, 00:16:39.471 "supported_io_types": { 00:16:39.471 "read": true, 00:16:39.471 "write": true, 00:16:39.471 "unmap": true, 00:16:39.471 "write_zeroes": true, 00:16:39.471 "flush": false, 00:16:39.471 "reset": true, 00:16:39.471 "compare": false, 00:16:39.471 "compare_and_write": false, 00:16:39.471 "abort": false, 00:16:39.471 "nvme_admin": false, 00:16:39.471 "nvme_io": false 00:16:39.471 }, 00:16:39.471 "driver_specific": { 00:16:39.471 "lvol": { 00:16:39.471 "lvol_store_uuid": "44a2d78f-351e-476b-971c-dc1c6ec8a489", 00:16:39.471 "base_bdev": "Nvme0n1", 00:16:39.471 "thin_provision": true, 00:16:39.471 "snapshot": false, 00:16:39.471 "clone": false, 00:16:39.471 "esnap_clone": false 00:16:39.471 } 00:16:39.471 } 00:16:39.471 } 00:16:39.471 ] 00:16:39.471 12:08:46 -- common/autotest_common.sh@895 -- # return 0 00:16:39.471 12:08:46 -- compress/compress.sh@41 -- # '[' -z '' ']' 00:16:39.471 12:08:46 -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:16:39.729 [2024-07-25 12:08:46.835869] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:16:39.729 COMP_lvs0/lv0 00:16:39.729 12:08:46 -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:16:39.729 12:08:46 -- common/autotest_common.sh@887 -- # local bdev_name=COMP_lvs0/lv0 00:16:39.729 12:08:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:39.729 12:08:46 -- 
common/autotest_common.sh@889 -- # local i 00:16:39.729 12:08:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:39.729 12:08:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:39.729 12:08:46 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:39.729 12:08:47 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:16:39.986 [ 00:16:39.986 { 00:16:39.986 "name": "COMP_lvs0/lv0", 00:16:39.986 "aliases": [ 00:16:39.986 "7be7dcba-bddc-55d5-8d98-25d90def4e65" 00:16:39.986 ], 00:16:39.986 "product_name": "compress", 00:16:39.986 "block_size": 512, 00:16:39.986 "num_blocks": 200704, 00:16:39.986 "uuid": "7be7dcba-bddc-55d5-8d98-25d90def4e65", 00:16:39.986 "assigned_rate_limits": { 00:16:39.986 "rw_ios_per_sec": 0, 00:16:39.986 "rw_mbytes_per_sec": 0, 00:16:39.986 "r_mbytes_per_sec": 0, 00:16:39.986 "w_mbytes_per_sec": 0 00:16:39.986 }, 00:16:39.986 "claimed": false, 00:16:39.986 "zoned": false, 00:16:39.986 "supported_io_types": { 00:16:39.986 "read": true, 00:16:39.986 "write": true, 00:16:39.986 "unmap": false, 00:16:39.986 "write_zeroes": true, 00:16:39.986 "flush": false, 00:16:39.986 "reset": false, 00:16:39.986 "compare": false, 00:16:39.986 "compare_and_write": false, 00:16:39.986 "abort": false, 00:16:39.986 "nvme_admin": false, 00:16:39.986 "nvme_io": false 00:16:39.986 }, 00:16:39.986 "driver_specific": { 00:16:39.986 "compress": { 00:16:39.986 "name": "COMP_lvs0/lv0", 00:16:39.986 "base_bdev_name": "7d9081d3-1a18-44c8-9a2d-6fcda4aed334" 00:16:39.986 } 00:16:39.986 } 00:16:39.986 } 00:16:39.986 ] 00:16:39.986 12:08:47 -- common/autotest_common.sh@895 -- # return 0 00:16:39.986 12:08:47 -- compress/compress.sh@59 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/test/bdev/bdevio/tests.py perform_tests 00:16:39.986 [2024-07-25 12:08:47.256863] accel_dpdk_compressdev.c: 
690:_set_pmd: *NOTICE*: Channel 0x7f98281ad4d0 PMD being used: compress_qat 00:16:39.986 I/O targets: 00:16:39.986 COMP_lvs0/lv0: 200704 blocks of 512 bytes (98 MiB) 00:16:39.987 00:16:39.987 00:16:39.987 CUnit - A unit testing framework for C - Version 2.1-3 00:16:39.987 http://cunit.sourceforge.net/ 00:16:39.987 00:16:39.987 00:16:39.987 Suite: bdevio tests on: COMP_lvs0/lv0 00:16:39.987 Test: blockdev write read block ...passed 00:16:39.987 Test: blockdev write zeroes read block ...passed 00:16:39.987 Test: blockdev write zeroes read no split ...passed 00:16:39.987 Test: blockdev write zeroes read split ...passed 00:16:39.987 Test: blockdev write zeroes read split partial ...passed 00:16:39.987 Test: blockdev reset ...[2024-07-25 12:08:47.293847] vbdev_compress.c: 252:vbdev_compress_submit_request: *ERROR*: Unknown I/O type 5 00:16:39.987 passed 00:16:39.987 Test: blockdev write read 8 blocks ...passed 00:16:39.987 Test: blockdev write read size > 128k ...passed 00:16:39.987 Test: blockdev write read invalid size ...passed 00:16:39.987 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:39.987 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:39.987 Test: blockdev write read max offset ...passed 00:16:39.987 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:40.245 Test: blockdev writev readv 8 blocks ...passed 00:16:40.245 Test: blockdev writev readv 30 x 1block ...passed 00:16:40.245 Test: blockdev writev readv block ...passed 00:16:40.245 Test: blockdev writev readv size > 128k ...passed 00:16:40.245 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:40.245 Test: blockdev comparev and writev ...passed 00:16:40.245 Test: blockdev nvme passthru rw ...passed 00:16:40.245 Test: blockdev nvme passthru vendor specific ...passed 00:16:40.245 Test: blockdev nvme admin passthru ...passed 00:16:40.245 Test: blockdev copy ...passed 00:16:40.245 00:16:40.245 Run Summary: 
Type Total Ran Passed Failed Inactive 00:16:40.245 suites 1 1 n/a 0 0 00:16:40.245 tests 23 23 23 0 0 00:16:40.245 asserts 130 130 130 0 n/a 00:16:40.245 00:16:40.245 Elapsed time = 0.091 seconds 00:16:40.245 0 00:16:40.245 12:08:47 -- compress/compress.sh@60 -- # destroy_vols 00:16:40.245 12:08:47 -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0 00:16:40.245 12:08:47 -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0 00:16:40.504 12:08:47 -- compress/compress.sh@61 -- # trap - SIGINT SIGTERM EXIT 00:16:40.504 12:08:47 -- compress/compress.sh@62 -- # killprocess 1282616 00:16:40.504 12:08:47 -- common/autotest_common.sh@926 -- # '[' -z 1282616 ']' 00:16:40.504 12:08:47 -- common/autotest_common.sh@930 -- # kill -0 1282616 00:16:40.504 12:08:47 -- common/autotest_common.sh@931 -- # uname 00:16:40.504 12:08:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:40.504 12:08:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1282616 00:16:40.504 12:08:47 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:40.504 12:08:47 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:40.504 12:08:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1282616' 00:16:40.504 killing process with pid 1282616 00:16:40.504 12:08:47 -- common/autotest_common.sh@945 -- # kill 1282616 00:16:40.504 12:08:47 -- common/autotest_common.sh@950 -- # wait 1282616 00:16:42.402 12:08:49 -- compress/compress.sh@91 -- # '[' 1 -eq 1 ']' 00:16:42.402 12:08:49 -- compress/compress.sh@92 -- # run_bdevperf 64 16384 30 00:16:42.402 12:08:49 -- compress/compress.sh@66 -- # [[ compdev == \c\o\m\p\d\e\v ]] 00:16:42.402 12:08:49 -- compress/compress.sh@71 -- # bdevperf_pid=1283365 00:16:42.402 12:08:49 -- compress/compress.sh@72 -- # trap 'killprocess $bdevperf_pid; 
error_cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:42.402 12:08:49 -- compress/compress.sh@67 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/build/examples/bdevperf -z -q 64 -o 16384 -w verify -t 30 -C -m 0x6 -c /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/dpdk.json 00:16:42.402 12:08:49 -- compress/compress.sh@73 -- # waitforlisten 1283365 00:16:42.402 12:08:49 -- common/autotest_common.sh@819 -- # '[' -z 1283365 ']' 00:16:42.402 12:08:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.402 12:08:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:42.402 12:08:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.402 12:08:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:42.402 12:08:49 -- common/autotest_common.sh@10 -- # set +x 00:16:42.402 [2024-07-25 12:08:49.439339] Starting SPDK v24.01.1-pre git sha1 dbef7efac / DPDK 23.11.0 initialization... 
00:16:42.402 [2024-07-25 12:08:49.439408] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x6 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1283365 ] 00:16:42.402 [2024-07-25 12:08:49.526154] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:42.402 [2024-07-25 12:08:49.612016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:42.402 [2024-07-25 12:08:49.612019] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:42.967 [2024-07-25 12:08:50.154383] accel_dpdk_compressdev.c: 296:accel_init_compress_drivers: *NOTICE*: initialized QAT PMD 00:16:42.967 12:08:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:42.967 12:08:50 -- common/autotest_common.sh@852 -- # return 0 00:16:42.967 12:08:50 -- compress/compress.sh@74 -- # create_vols 00:16:42.967 12:08:50 -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/gen_nvme.sh 00:16:42.967 12:08:50 -- compress/compress.sh@34 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:16:43.529 [2024-07-25 12:08:50.718497] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1cb10a0 PMD being used: compress_qat 00:16:43.529 12:08:50 -- compress/compress.sh@35 -- # waitforbdev Nvme0n1 00:16:43.529 12:08:50 -- common/autotest_common.sh@887 -- # local bdev_name=Nvme0n1 00:16:43.529 12:08:50 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:43.529 12:08:50 -- common/autotest_common.sh@889 -- # local i 00:16:43.529 12:08:50 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:43.529 12:08:50 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:43.529 12:08:50 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:43.785 12:08:50 -- common/autotest_common.sh@894 -- # 
/var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 2000 00:16:43.785 [ 00:16:43.785 { 00:16:43.785 "name": "Nvme0n1", 00:16:43.785 "aliases": [ 00:16:43.785 "01000000-0000-0000-5cd2-e42bec7b5351" 00:16:43.785 ], 00:16:43.785 "product_name": "NVMe disk", 00:16:43.785 "block_size": 512, 00:16:43.785 "num_blocks": 7501476528, 00:16:43.785 "uuid": "01000000-0000-0000-5cd2-e42bec7b5351", 00:16:43.785 "assigned_rate_limits": { 00:16:43.785 "rw_ios_per_sec": 0, 00:16:43.785 "rw_mbytes_per_sec": 0, 00:16:43.785 "r_mbytes_per_sec": 0, 00:16:43.785 "w_mbytes_per_sec": 0 00:16:43.785 }, 00:16:43.785 "claimed": false, 00:16:43.785 "zoned": false, 00:16:43.785 "supported_io_types": { 00:16:43.785 "read": true, 00:16:43.785 "write": true, 00:16:43.785 "unmap": true, 00:16:43.785 "write_zeroes": true, 00:16:43.785 "flush": true, 00:16:43.785 "reset": true, 00:16:43.785 "compare": false, 00:16:43.785 "compare_and_write": false, 00:16:43.785 "abort": true, 00:16:43.785 "nvme_admin": true, 00:16:43.785 "nvme_io": true 00:16:43.785 }, 00:16:43.785 "driver_specific": { 00:16:43.785 "nvme": [ 00:16:43.785 { 00:16:43.785 "pci_address": "0000:5e:00.0", 00:16:43.785 "trid": { 00:16:43.785 "trtype": "PCIe", 00:16:43.785 "traddr": "0000:5e:00.0" 00:16:43.785 }, 00:16:43.785 "ctrlr_data": { 00:16:43.785 "cntlid": 0, 00:16:43.785 "vendor_id": "0x8086", 00:16:43.785 "model_number": "INTEL SSDPF2KX038T1", 00:16:43.785 "serial_number": "PHAX137100D13P8CGN", 00:16:43.785 "firmware_revision": "9CV10015", 00:16:43.785 "subnqn": "nqn.2021-09.com.intel:PHAX137100D13P8CGN ", 00:16:43.785 "oacs": { 00:16:43.785 "security": 0, 00:16:43.785 "format": 1, 00:16:43.785 "firmware": 1, 00:16:43.785 "ns_manage": 1 00:16:43.785 }, 00:16:43.785 "multi_ctrlr": false, 00:16:43.785 "ana_reporting": false 00:16:43.785 }, 00:16:43.785 "vs": { 00:16:43.785 "nvme_version": "1.4" 00:16:43.785 }, 00:16:43.785 "ns_data": { 00:16:43.786 "id": 1, 00:16:43.786 "can_share": false 
00:16:43.786 } 00:16:43.786 } 00:16:43.786 ], 00:16:43.786 "mp_policy": "active_passive" 00:16:43.786 } 00:16:43.786 } 00:16:43.786 ] 00:16:43.786 12:08:51 -- common/autotest_common.sh@895 -- # return 0 00:16:43.786 12:08:51 -- compress/compress.sh@37 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none Nvme0n1 lvs0 00:16:44.043 [2024-07-25 12:08:51.230588] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1cb19e0 PMD being used: compress_qat 00:16:44.043 3c15cd99-42b5-4156-8f11-19789ee2a7ed 00:16:44.043 12:08:51 -- compress/compress.sh@38 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -t -l lvs0 lv0 100 00:16:44.300 6dd2d11b-8523-4843-b31b-3a2f5fcea1ac 00:16:44.300 12:08:51 -- compress/compress.sh@39 -- # waitforbdev lvs0/lv0 00:16:44.300 12:08:51 -- common/autotest_common.sh@887 -- # local bdev_name=lvs0/lv0 00:16:44.300 12:08:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:44.300 12:08:51 -- common/autotest_common.sh@889 -- # local i 00:16:44.300 12:08:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:44.300 12:08:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:44.300 12:08:51 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:44.300 12:08:51 -- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b lvs0/lv0 -t 2000 00:16:44.558 [ 00:16:44.558 { 00:16:44.558 "name": "6dd2d11b-8523-4843-b31b-3a2f5fcea1ac", 00:16:44.558 "aliases": [ 00:16:44.558 "lvs0/lv0" 00:16:44.558 ], 00:16:44.558 "product_name": "Logical Volume", 00:16:44.558 "block_size": 512, 00:16:44.558 "num_blocks": 204800, 00:16:44.558 "uuid": "6dd2d11b-8523-4843-b31b-3a2f5fcea1ac", 00:16:44.558 "assigned_rate_limits": { 00:16:44.558 "rw_ios_per_sec": 0, 00:16:44.558 "rw_mbytes_per_sec": 0, 00:16:44.558 
"r_mbytes_per_sec": 0, 00:16:44.558 "w_mbytes_per_sec": 0 00:16:44.558 }, 00:16:44.558 "claimed": false, 00:16:44.558 "zoned": false, 00:16:44.558 "supported_io_types": { 00:16:44.558 "read": true, 00:16:44.558 "write": true, 00:16:44.558 "unmap": true, 00:16:44.558 "write_zeroes": true, 00:16:44.558 "flush": false, 00:16:44.558 "reset": true, 00:16:44.558 "compare": false, 00:16:44.558 "compare_and_write": false, 00:16:44.558 "abort": false, 00:16:44.558 "nvme_admin": false, 00:16:44.558 "nvme_io": false 00:16:44.558 }, 00:16:44.558 "driver_specific": { 00:16:44.558 "lvol": { 00:16:44.558 "lvol_store_uuid": "3c15cd99-42b5-4156-8f11-19789ee2a7ed", 00:16:44.558 "base_bdev": "Nvme0n1", 00:16:44.558 "thin_provision": true, 00:16:44.558 "snapshot": false, 00:16:44.558 "clone": false, 00:16:44.558 "esnap_clone": false 00:16:44.558 } 00:16:44.558 } 00:16:44.558 } 00:16:44.558 ] 00:16:44.558 12:08:51 -- common/autotest_common.sh@895 -- # return 0 00:16:44.558 12:08:51 -- compress/compress.sh@41 -- # '[' -z '' ']' 00:16:44.558 12:08:51 -- compress/compress.sh@42 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem 00:16:44.816 [2024-07-25 12:08:51.897559] vbdev_compress.c:1016:vbdev_compress_claim: *NOTICE*: registered io_device and virtual bdev for: COMP_lvs0/lv0 00:16:44.816 COMP_lvs0/lv0 00:16:44.816 12:08:51 -- compress/compress.sh@46 -- # waitforbdev COMP_lvs0/lv0 00:16:44.816 12:08:51 -- common/autotest_common.sh@887 -- # local bdev_name=COMP_lvs0/lv0 00:16:44.816 12:08:51 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:16:44.816 12:08:51 -- common/autotest_common.sh@889 -- # local i 00:16:44.816 12:08:51 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:16:44.816 12:08:51 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:16:44.816 12:08:51 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:44.816 12:08:52 
-- common/autotest_common.sh@894 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b COMP_lvs0/lv0 -t 2000 00:16:45.075 [ 00:16:45.075 { 00:16:45.075 "name": "COMP_lvs0/lv0", 00:16:45.075 "aliases": [ 00:16:45.075 "6b6d2875-1621-57dd-87c7-5e5b0b92720a" 00:16:45.075 ], 00:16:45.075 "product_name": "compress", 00:16:45.075 "block_size": 512, 00:16:45.075 "num_blocks": 200704, 00:16:45.075 "uuid": "6b6d2875-1621-57dd-87c7-5e5b0b92720a", 00:16:45.075 "assigned_rate_limits": { 00:16:45.075 "rw_ios_per_sec": 0, 00:16:45.075 "rw_mbytes_per_sec": 0, 00:16:45.075 "r_mbytes_per_sec": 0, 00:16:45.075 "w_mbytes_per_sec": 0 00:16:45.075 }, 00:16:45.075 "claimed": false, 00:16:45.075 "zoned": false, 00:16:45.075 "supported_io_types": { 00:16:45.075 "read": true, 00:16:45.075 "write": true, 00:16:45.075 "unmap": false, 00:16:45.075 "write_zeroes": true, 00:16:45.075 "flush": false, 00:16:45.075 "reset": false, 00:16:45.075 "compare": false, 00:16:45.075 "compare_and_write": false, 00:16:45.075 "abort": false, 00:16:45.075 "nvme_admin": false, 00:16:45.075 "nvme_io": false 00:16:45.075 }, 00:16:45.075 "driver_specific": { 00:16:45.075 "compress": { 00:16:45.075 "name": "COMP_lvs0/lv0", 00:16:45.075 "base_bdev_name": "6dd2d11b-8523-4843-b31b-3a2f5fcea1ac" 00:16:45.075 } 00:16:45.075 } 00:16:45.075 } 00:16:45.075 ] 00:16:45.075 12:08:52 -- common/autotest_common.sh@895 -- # return 0 00:16:45.075 12:08:52 -- compress/compress.sh@75 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:45.075 [2024-07-25 12:08:52.331647] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x7f086c1ad740 PMD being used: compress_qat 00:16:45.075 [2024-07-25 12:08:52.333376] accel_dpdk_compressdev.c: 690:_set_pmd: *NOTICE*: Channel 0x1cb0ba0 PMD being used: compress_qat 00:16:45.075 Running I/O for 30 seconds... 
00:17:17.112
00:17:17.112 Latency(us)
00:17:17.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:17.112 Job: COMP_lvs0/lv0 (Core Mask 0x2, workload: verify, depth: 64, IO size: 16384)
00:17:17.112 Verification LBA range: start 0x0 length 0xc40
00:17:17.112 COMP_lvs0/lv0 : 30.01 2539.49 39.68 0.00 0.00 25109.29 324.12 21769.35
00:17:17.112 Job: COMP_lvs0/lv0 (Core Mask 0x4, workload: verify, depth: 64, IO size: 16384)
00:17:17.112 Verification LBA range: start 0xc40 length 0xc40
00:17:17.112 COMP_lvs0/lv0 : 30.00 10111.67 157.99 0.00 0.00 6281.53 414.94 16526.47
00:17:17.112 ===================================================================================================================
00:17:17.112 Total : 12651.16 197.67 0.00 0.00 10061.02 324.12 21769.35
00:17:17.112 0
00:17:17.112 12:09:22 -- compress/compress.sh@76 -- # destroy_vols
00:17:17.112 12:09:22 -- compress/compress.sh@29 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_compress_delete COMP_lvs0/lv0
00:17:17.112 12:09:22 -- compress/compress.sh@30 -- # /var/jenkins/workspace/crypto-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs0
00:17:17.112 12:09:22 -- compress/compress.sh@77 -- # trap - SIGINT SIGTERM EXIT
00:17:17.112 12:09:22 -- compress/compress.sh@78 -- # killprocess 1283365
00:17:17.112 12:09:22 -- common/autotest_common.sh@926 -- # '[' -z 1283365 ']'
00:17:17.112 12:09:22 -- common/autotest_common.sh@930 -- # kill -0 1283365
00:17:17.112 12:09:22 -- common/autotest_common.sh@931 -- # uname
00:17:17.112 12:09:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:17:17.112 12:09:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 1283365
00:17:17.112 12:09:22 -- common/autotest_common.sh@932 -- # process_name=reactor_1
00:17:17.112 12:09:22 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']'
00:17:17.112 12:09:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 1283365'
00:17:17.112 killing process with pid 1283365
00:17:17.112 12:09:22 -- common/autotest_common.sh@945 -- # kill 1283365
00:17:17.112 Received shutdown signal, test time was about 30.000000 seconds
00:17:17.112
00:17:17.112 Latency(us)
00:17:17.112 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:17.112 ===================================================================================================================
00:17:17.112 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:17:17.112 12:09:22 -- common/autotest_common.sh@950 -- # wait 1283365
00:17:17.369 12:09:24 -- compress/compress.sh@95 -- # export TEST_TRANSPORT=tcp
00:17:17.369 12:09:24 -- compress/compress.sh@95 -- # TEST_TRANSPORT=tcp
00:17:17.369 12:09:24 -- compress/compress.sh@96 -- # NET_TYPE=virt
00:17:17.369 12:09:24 -- compress/compress.sh@96 -- # nvmftestinit
00:17:17.369 12:09:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']'
00:17:17.370 12:09:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:17:17.370 12:09:24 -- nvmf/common.sh@436 -- # prepare_net_devs
00:17:17.370 12:09:24 -- nvmf/common.sh@398 -- # local -g is_hw=no
00:17:17.370 12:09:24 -- nvmf/common.sh@400 -- # remove_spdk_ns
00:17:17.370 12:09:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:17.370 12:09:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:17:17.370 12:09:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:17.370 12:09:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]]
00:17:17.370 12:09:24 -- nvmf/common.sh@404 -- # [[ no == yes ]]
00:17:17.370 12:09:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]]
00:17:17.370 12:09:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]]
00:17:17.370 12:09:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]]
00:17:17.370 12:09:24 -- nvmf/common.sh@420 -- # nvmf_veth_init
00:17:17.370 12:09:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1
00:17:17.370 12:09:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:17:17.370 12:09:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3
00:17:17.370 12:09:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br
00:17:17.370 12:09:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if
00:17:17.370 12:09:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br
00:17:17.370 12:09:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk
00:17:17.370 12:09:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:17:17.370 12:09:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if
00:17:17.370 12:09:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2
00:17:17.370 12:09:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br
00:17:17.370 12:09:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2
00:17:17.370 12:09:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster
00:17:17.370 12:09:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster
00:17:17.370 Cannot find device "nvmf_tgt_br"
00:17:17.370 12:09:24 -- nvmf/common.sh@154 -- # true
00:17:17.370 12:09:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster
00:17:17.370 Cannot find device "nvmf_tgt_br2"
00:17:17.370 12:09:24 -- nvmf/common.sh@155 -- # true
00:17:17.370 12:09:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down
00:17:17.370 12:09:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down
00:17:17.370 Cannot find device "nvmf_tgt_br"
00:17:17.370 12:09:24 -- nvmf/common.sh@157 -- # true
00:17:17.370 12:09:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down
00:17:17.370 Cannot find device "nvmf_tgt_br2"
00:17:17.370 12:09:24 -- nvmf/common.sh@158 -- # true
00:17:17.370 12:09:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge
00:17:17.370 12:09:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if
00:17:17.627 12:09:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
00:17:17.627 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:17:17.627 12:09:24 -- nvmf/common.sh@161 -- # true
00:17:17.628 12:09:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
00:17:17.628 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory
00:17:17.628 12:09:24 -- nvmf/common.sh@162 -- # true
00:17:17.628 12:09:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk
00:17:17.628 12:09:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br
00:17:17.628 12:09:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
00:17:17.628 12:09:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
00:17:17.628 12:09:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
00:17:17.628 12:09:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
00:17:17.628 12:09:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if
00:17:17.628 12:09:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
00:17:17.628 12:09:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
00:17:17.628 12:09:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up
00:17:17.628 12:09:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up
00:17:17.628 12:09:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up
00:17:17.628 12:09:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up
00:17:17.628 12:09:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
00:17:17.628 12:09:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
00:17:17.628 12:09:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up
00:17:17.628 12:09:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge
00:17:17.628 12:09:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up
00:17:17.628 12:09:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br
00:17:17.628 12:09:24 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br
00:17:17.628 12:09:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br
00:17:17.628 12:09:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
00:17:17.628 12:09:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
00:17:17.628 12:09:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2
00:17:17.628 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:17.628 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms
00:17:17.628
00:17:17.628 --- 10.0.0.2 ping statistics ---
00:17:17.628 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:17.628 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms
00:17:17.628 12:09:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3
00:17:29.826 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data.
00:17:29.826
00:17:29.826 --- 10.0.0.3 ping statistics ---
00:17:29.826 1 packets transmitted, 0 received, 100% packet loss, time 0ms
00:17:29.826
00:17:29.826 12:09:34 -- nvmf/common.sh@205 -- # trap - ERR
00:17:29.826 12:09:34 -- nvmf/common.sh@205 -- # print_backtrace
00:17:29.826 12:09:34 -- common/autotest_common.sh@1132 -- # [[ ehxBET =~ e ]]
00:17:29.826 12:09:34 -- common/autotest_common.sh@1134 -- # args=('compdev')
00:17:29.826 12:09:34 -- common/autotest_common.sh@1134 -- # local args
00:17:29.826 12:09:34 -- common/autotest_common.sh@1136 -- # xtrace_disable
00:17:29.826 12:09:34 -- common/autotest_common.sh@10 -- # set +x
00:17:29.826 ========== Backtrace start: ==========
00:17:29.826
00:17:29.826 in /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh:205 -> nvmf_veth_init([])
00:17:29.826 ...
00:17:29.826 200 iptables -I INPUT 1 -i $NVMF_INITIATOR_INTERFACE -p tcp --dport $NVMF_PORT -j ACCEPT
00:17:29.826 201 iptables -A FORWARD -i $NVMF_BRIDGE -o $NVMF_BRIDGE -j ACCEPT
00:17:29.826 202
00:17:29.826 203 # Verify connectivity
00:17:29.826 204 ping -c 1 $NVMF_FIRST_TARGET_IP
00:17:29.826 => 205 ping -c 1 $NVMF_SECOND_TARGET_IP
00:17:29.826 206 "${NVMF_TARGET_NS_CMD[@]}" ping -c 1 $NVMF_INITIATOR_IP
00:17:29.826 207
00:17:29.826 208 NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:17:29.826 209 }
00:17:29.826 210
00:17:29.826 ...
00:17:29.826 in /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh:420 -> prepare_net_devs([])
00:17:29.826 ...
00:17:29.826 415 echo "WARNING: No supported devices were found, fallback requested for $TEST_TRANSPORT test"
00:17:29.826 416 fi
00:17:29.826 417
00:17:29.826 418 # NET_TYPE == virt or phy-fallback
00:17:29.826 419 if [[ $TEST_TRANSPORT == tcp ]]; then
00:17:29.826 => 420 nvmf_veth_init
00:17:29.826 421 return 0
00:17:29.826 422 fi
00:17:29.826 423
00:17:29.826 424 echo "ERROR: virt and fallback setup is not supported for $TEST_TRANSPORT"
00:17:29.826 425 return 1
00:17:29.826 ...
00:17:29.826 in /var/jenkins/workspace/crypto-phy-autotest/spdk/test/nvmf/common.sh:436 -> nvmftestinit([])
00:17:29.826 ...
00:17:29.826 431 return 1
00:17:29.826 432 fi
00:17:29.826 433
00:17:29.826 434 trap 'nvmftestfini' SIGINT SIGTERM EXIT
00:17:29.826 435
00:17:29.826 => 436 prepare_net_devs
00:17:29.826 437
00:17:29.826 438 if [ "$TEST_MODE" == "iso" ]; then
00:17:29.826 439 $rootdir/scripts/setup.sh
00:17:29.826 440 fi
00:17:29.826 441
00:17:29.826 ...
00:17:29.826 in /var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh:96 -> main(["compdev"])
00:17:29.826 ...
00:17:29.826 91 if [ $RUN_NIGHTLY -eq 1 ]; then
00:17:29.826 92 run_bdevperf 64 16384 30
00:17:29.826 93
00:17:29.826 94 # run perf on nvmf target w/compressed vols
00:17:29.826 95 export TEST_TRANSPORT=tcp
00:17:29.826 => 96 NET_TYPE=virt nvmftestinit
00:17:29.826 97 nvmfappstart -m 0x7
00:17:29.826 98 trap "nvmftestfini; error_cleanup; exit 1" SIGINT SIGTERM EXIT
00:17:29.826 99
00:17:29.826 100 # Create an NVMe-oF subsystem and add compress bdevs as namespaces
00:17:29.826 101 $rpc_py nvmf_create_transport -t $TEST_TRANSPORT -u 8192
00:17:29.826 ...
00:17:29.826
00:17:29.826 ========== Backtrace end ==========
00:17:29.826 12:09:34 -- common/autotest_common.sh@1173 -- # return 0
00:17:29.826 12:09:34 -- nvmf/common.sh@1 -- # nvmftestfini
00:17:29.826 12:09:34 -- nvmf/common.sh@476 -- # nvmfcleanup
00:17:29.826 12:09:34 -- nvmf/common.sh@116 -- # sync
00:17:29.826 12:09:34 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']'
00:17:29.826 12:09:34 -- nvmf/common.sh@119 -- # set +e
00:17:29.826 12:09:34 -- nvmf/common.sh@120 -- # for i in {1..20}
00:17:29.826 12:09:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp
00:17:29.826 12:09:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:17:29.826 12:09:35 -- nvmf/common.sh@123 -- # set -e
00:17:29.826 12:09:35 -- nvmf/common.sh@124 -- # return 0
00:17:29.826 12:09:35 -- nvmf/common.sh@477 -- # '[' -n '' ']'
00:17:29.826 12:09:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:17:29.826 12:09:35 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]]
00:17:29.826 12:09:35 -- nvmf/common.sh@484 -- # nvmf_tcp_fini
00:17:29.826 12:09:35 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:17:29.826 12:09:35 -- nvmf/common.sh@277 -- # remove_spdk_ns
00:17:29.826 12:09:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:17:29.826 12:09:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null'
00:17:29.826 12:09:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:17:29.826 12:09:35 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if
00:17:29.826 12:09:35 -- common/autotest_common.sh@1104 -- # trap - ERR
00:17:29.826 12:09:35 -- common/autotest_common.sh@1104 -- # print_backtrace
00:17:29.826 12:09:35 -- common/autotest_common.sh@1132 -- # [[ ehxBET =~ e ]]
00:17:29.826 12:09:35 -- common/autotest_common.sh@1134 -- # args=('compdev' '/var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh' 'compress_compdev' '/var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf')
00:17:29.826 12:09:35 -- common/autotest_common.sh@1134 -- # local args
00:17:29.826 12:09:35 -- common/autotest_common.sh@1136 -- # xtrace_disable
00:17:29.826 12:09:35 -- common/autotest_common.sh@10 -- # set +x
00:17:29.826 ========== Backtrace start: ==========
00:17:29.826
00:17:29.826 in /var/jenkins/workspace/crypto-phy-autotest/spdk/test/common/autotest_common.sh:1104 -> run_test(["compress_compdev"],["/var/jenkins/workspace/crypto-phy-autotest/spdk/test/compress/compress.sh"],["compdev"])
00:17:29.826 ...
00:17:29.827 1099 timing_enter $test_name
00:17:29.827 1100 echo "************************************"
00:17:29.827 1101 echo "START TEST $test_name"
00:17:29.827 1102 echo "************************************"
00:17:29.827 1103 xtrace_restore
00:17:29.827 1104 time "$@"
00:17:29.827 1105 xtrace_disable
00:17:29.827 1106 echo "************************************"
00:17:29.827 1107 echo "END TEST $test_name"
00:17:29.827 1108 echo "************************************"
00:17:29.827 1109 timing_exit $test_name
00:17:29.827 ...
00:17:29.827 in /var/jenkins/workspace/crypto-phy-autotest/spdk/autotest.sh:351 -> main(["/var/jenkins/workspace/crypto-phy-autotest/autorun-spdk.conf"])
00:17:29.827 ...
00:17:29.827 346 if [ $SPDK_TEST_VMD -eq 1 ]; then
00:17:29.827 347 run_test "vmd" $rootdir/test/vmd/vmd.sh
00:17:29.827 348 fi
00:17:29.827 349
00:17:29.827 350 if [ $SPDK_TEST_VBDEV_COMPRESS -eq 1 ]; then
00:17:29.827 => 351 run_test "compress_compdev" $rootdir/test/compress/compress.sh "compdev"
00:17:29.827 352 run_test "compress_isal" $rootdir/test/compress/compress.sh "isal"
00:17:29.827 353 fi
00:17:29.827 354
00:17:29.827 355 if [ $SPDK_TEST_OPAL -eq 1 ]; then
00:17:29.827 356 run_test "nvme_opal" $rootdir/test/nvme/nvme_opal.sh
00:17:29.827 ...
00:17:29.827
00:17:29.827 ========== Backtrace end ==========
00:17:29.827 12:09:35 -- common/autotest_common.sh@1173 -- # return 0
00:17:29.827
00:17:29.827 real 1m15.207s
00:17:29.827 user 2m17.718s
00:17:29.827 sys 0m6.130s
00:17:29.827 12:09:35 -- common/autotest_common.sh@1 -- # autotest_cleanup
00:17:29.827 12:09:35 -- common/autotest_common.sh@1371 -- # local autotest_es=1
00:17:29.827 12:09:35 -- common/autotest_common.sh@1372 -- # xtrace_disable
00:17:29.827 12:09:35 -- common/autotest_common.sh@10 -- # set +x
00:17:42.100 INFO: APP EXITING
00:17:42.100 INFO: killing all VMs
00:17:42.100 INFO: killing vhost app
00:17:42.359 WARN: no vhost pid file found
00:17:42.359 INFO: EXIT DONE
00:17:46.551 0000:85:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:85:05.5
00:17:46.551 0000:ae:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:ae:05.5
00:17:46.551 Waiting for block devices as requested
00:17:46.552 0000:5e:00.0 (8086 0b60): vfio-pci -> nvme
00:17:46.552 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma
00:17:46.552 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma
00:17:46.552 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma
00:17:46.552 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma
00:17:46.552 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma
00:17:46.552 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma
00:17:46.552 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma
00:17:46.811 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma
00:17:46.811 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma
00:17:46.811 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma
00:17:46.811 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma
00:17:47.069 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma
00:17:47.070 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma
00:17:47.070 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma
00:17:47.330 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma
00:17:47.330 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma
00:17:51.519 0000:85:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:85:05.5
00:17:51.519 0000:ae:05.5 (8086 201d): Skipping not allowed VMD controller at 0000:ae:05.5
00:17:51.519 Cleaning
00:17:51.519 Removing: /var/run/dpdk/spdk0/config
00:17:51.519 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:17:51.519 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:17:51.519 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:17:51.519 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:17:51.519 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:17:51.519 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:17:51.519 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:17:51.519 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:17:51.519 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:17:51.519 Removing: /var/run/dpdk/spdk0/hugepage_info
00:17:51.519 Removing: /dev/shm/spdk_tgt_trace.pid1177037
00:17:51.519 Removing: /var/run/dpdk/spdk0
00:17:51.519 Removing: /var/run/dpdk/spdk_pid1176370
00:17:51.519 Removing: /var/run/dpdk/spdk_pid1177037
00:17:51.519 Removing: /var/run/dpdk/spdk_pid1177601
00:17:51.519 Removing: /var/run/dpdk/spdk_pid1179203
00:17:51.519 Removing: /var/run/dpdk/spdk_pid1180398
00:17:51.519 Removing: /var/run/dpdk/spdk_pid1180630
00:17:51.519 Removing: /var/run/dpdk/spdk_pid1180863
00:17:51.519 Removing: /var/run/dpdk/spdk_pid1181119
00:17:51.519 Removing: /var/run/dpdk/spdk_pid1181362
00:17:51.519 Removing: /var/run/dpdk/spdk_pid1181558
00:17:51.519 Removing: /var/run/dpdk/spdk_pid1181752
00:17:51.519 Removing: /var/run/dpdk/spdk_pid1182012
00:17:51.519 Removing: /var/run/dpdk/spdk_pid1182734
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1185129
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1185341
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1185585
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1185795
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1185849
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1186044
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1186228
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1186426
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1186614
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1186810
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1186991
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1187228
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1187456
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1187728
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1187919
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1188120
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1188301
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1188494
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1188684
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1188881
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1189067
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1189260
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1189448
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1189647
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1189852
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1190119
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1190364
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1190578
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1190759
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1190957
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1191141
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1191334
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1191521
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1191721
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1191907
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1192100
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1192283
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1192490
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1192731
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1192993
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1193221
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1193423
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1193606
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1193809
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1193990
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1194185
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1194522
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1194739
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1195027
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1195298
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1195511
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1195859
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1196062
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1196418
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1196625
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1196983
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1197192
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1197504
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1197749
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1197953
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1198301
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1198379
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1198790
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1199096
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1199467
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1199611
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1203141
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1205060
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1206667
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1207535
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1208447
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1208814
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1208843
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1208866
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1212724
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1213201
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1214202
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1214404
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1218600
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1221896
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1225125
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1229248
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1233765
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1237641
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1243203
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1247685
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1252274
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1254725
00:17:51.520 Removing: /var/run/dpdk/spdk_pid1257181
00:17:51.779 Removing: /var/run/dpdk/spdk_pid1260587
00:17:51.779 Removing: /var/run/dpdk/spdk_pid1262669
00:17:51.779 Removing: /var/run/dpdk/spdk_pid1265174
00:17:51.779 Removing: /var/run/dpdk/spdk_pid1267883
00:17:51.779 Removing: /var/run/dpdk/spdk_pid1270886
00:17:51.779 Removing: /var/run/dpdk/spdk_pid1273303
00:17:51.779 Removing: /var/run/dpdk/spdk_pid1275994
00:17:51.779 Removing: /var/run/dpdk/spdk_pid1276362
00:17:51.779 Removing: /var/run/dpdk/spdk_pid1276725
00:17:51.779 Removing: /var/run/dpdk/spdk_pid1277094
00:17:51.779 Removing: /var/run/dpdk/spdk_pid1277543
00:17:51.779 Removing: /var/run/dpdk/spdk_pid1278186
00:17:51.779 Removing: /var/run/dpdk/spdk_pid1278976
00:17:51.779 Removing: /var/run/dpdk/spdk_pid1279293
00:17:51.779 Removing: /var/run/dpdk/spdk_pid1280400
00:17:51.779 Removing: /var/run/dpdk/spdk_pid1281508
00:17:51.779 Removing: /var/run/dpdk/spdk_pid1282616
00:17:51.779 Removing: /var/run/dpdk/spdk_pid1283365
00:17:51.779 Clean
00:19:58.234 killing process with pid 1130171
00:19:58.234 killing process with pid 1130168
00:19:58.234 killing process with pid 1130170
00:19:58.234 killing process with pid 1130169
00:19:58.234 12:12:05 -- common/autotest_common.sh@1436 -- # return 1
00:19:58.234 12:12:05 -- common/autotest_common.sh@1 -- # :
00:19:58.234 12:12:05 -- common/autotest_common.sh@1 -- # exit 1
00:19:58.246 [Pipeline] }
00:19:58.266 [Pipeline] // stage
00:19:58.274 [Pipeline] }
00:19:58.292 [Pipeline] // timeout
00:19:58.299 [Pipeline] }
00:19:58.303 ERROR: script returned exit code 1
00:19:58.303 Setting overall build result to FAILURE
00:19:58.319 [Pipeline] // catchError
00:19:58.324 [Pipeline] }
00:19:58.339 [Pipeline] // wrap
00:19:58.345 [Pipeline] }
00:19:58.359 [Pipeline] // catchError
00:19:58.367 [Pipeline] stage
00:19:58.370 [Pipeline] { (Epilogue)
00:19:58.383 [Pipeline] catchError
00:19:58.385 [Pipeline] {
00:19:58.400 [Pipeline] echo
00:19:58.402 Cleanup processes
00:19:58.408 [Pipeline] sh
00:19:58.732 + sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:19:58.732 1130211 tee /var/jenkins/workspace/crypto-phy-autotest/spdk/../output/power/collect-cpu-load.pm.log
00:19:58.732 1315349 sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:19:58.758 [Pipeline] sh
00:19:59.041 ++ sudo pgrep -af /var/jenkins/workspace/crypto-phy-autotest/spdk
00:19:59.041 ++ grep -v 'sudo pgrep'
00:19:59.041 ++ awk '{print $1}'
00:19:59.041 + sudo kill -9 1130211
00:19:59.041 + true
00:19:59.054 [Pipeline] sh
00:19:59.335 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:20:03.529 [Pipeline] sh
00:20:03.807 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:20:03.807 Artifacts sizes are good
00:20:03.822 [Pipeline] archiveArtifacts
00:20:03.828 Archiving artifacts
00:20:03.961 [Pipeline] sh
00:20:04.242 + sudo chown -R sys_sgci /var/jenkins/workspace/crypto-phy-autotest
00:20:04.255 [Pipeline] cleanWs
00:20:04.264 [WS-CLEANUP] Deleting project workspace...
00:20:04.264 [WS-CLEANUP] Deferred wipeout is used...
00:20:04.270 [WS-CLEANUP] done
00:20:04.272 [Pipeline] }
00:20:04.292 [Pipeline] // catchError
00:20:04.301 [Pipeline] echo
00:20:04.302 Tests finished with errors. Please check the logs for more info.
00:20:04.305 [Pipeline] echo
00:20:04.307 Execution node will be rebooted.
00:20:04.322 [Pipeline] build
00:20:04.325 Scheduling project: reset-job
00:20:04.337 [Pipeline] sh
00:20:04.613 + logger -p user.info -t JENKINS-CI
00:20:04.622 [Pipeline] }
00:20:04.638 [Pipeline] // stage
00:20:04.645 [Pipeline] }
00:20:04.662 [Pipeline] // node
00:20:04.668 [Pipeline] End of Pipeline
00:20:04.702 Finished: FAILURE